Test Report: Docker_Linux_containerd_arm64 22344

edd64449414ff518763defe8c5f2fdfa65b6a5d9:2025-12-27:43007

Failed tests (2/337)

Order  Failed test           Duration (s)
52     TestForceSystemdFlag  505.94
53     TestForceSystemdEnv   507.52
TestForceSystemdFlag (505.94s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-310604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1227 09:11:47.165732    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-310604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m21.149585506s)

-- stdout --
	* [force-systemd-flag-310604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-310604" primary control-plane node in "force-systemd-flag-310604" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I1227 09:10:42.800135  204666 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:10:42.800310  204666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:10:42.800324  204666 out.go:374] Setting ErrFile to fd 2...
	I1227 09:10:42.800331  204666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:10:42.800714  204666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 09:10:42.801241  204666 out.go:368] Setting JSON to false
	I1227 09:10:42.802140  204666 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3196,"bootTime":1766823447,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:10:42.802232  204666 start.go:143] virtualization:  
	I1227 09:10:42.805730  204666 out.go:179] * [force-systemd-flag-310604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:10:42.808307  204666 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:10:42.808421  204666 notify.go:221] Checking for updates...
	I1227 09:10:42.814703  204666 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:10:42.817982  204666 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 09:10:42.821099  204666 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 09:10:42.824151  204666 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:10:42.827145  204666 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:10:42.830746  204666 config.go:182] Loaded profile config "force-systemd-env-145961": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:10:42.830898  204666 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:10:42.863134  204666 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:10:42.863319  204666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:10:42.918342  204666 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:10:42.908953528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:10:42.918445  204666 docker.go:319] overlay module found
	I1227 09:10:42.921686  204666 out.go:179] * Using the docker driver based on user configuration
	I1227 09:10:42.924651  204666 start.go:309] selected driver: docker
	I1227 09:10:42.924672  204666 start.go:928] validating driver "docker" against <nil>
	I1227 09:10:42.924685  204666 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:10:42.925399  204666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:10:43.013713  204666 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:10:42.997716009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:10:43.013872  204666 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:10:43.014115  204666 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:10:43.017181  204666 out.go:179] * Using Docker driver with root privileges
	I1227 09:10:43.020064  204666 cni.go:84] Creating CNI manager for ""
	I1227 09:10:43.020140  204666 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:10:43.020159  204666 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:10:43.020250  204666 start.go:353] cluster config:
	{Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I1227 09:10:43.023468  204666 out.go:179] * Starting "force-systemd-flag-310604" primary control-plane node in "force-systemd-flag-310604" cluster
	I1227 09:10:43.026267  204666 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 09:10:43.029182  204666 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:10:43.032164  204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:10:43.032206  204666 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 09:10:43.032217  204666 cache.go:65] Caching tarball of preloaded images
	I1227 09:10:43.032253  204666 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:10:43.032309  204666 preload.go:251] Found /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 09:10:43.032319  204666 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 09:10:43.032459  204666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json ...
	I1227 09:10:43.032480  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json: {Name:mkbc9c01b6cdf50a409317d5cc6b1625281e0c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:43.051266  204666 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:10:43.051291  204666 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:10:43.051312  204666 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:10:43.051342  204666 start.go:360] acquireMachinesLock for force-systemd-flag-310604: {Name:mk07b16eff3a374cb7598dd22df6b68eafb28bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:10:43.051447  204666 start.go:364] duration metric: took 84.235µs to acquireMachinesLock for "force-systemd-flag-310604"
	I1227 09:10:43.051477  204666 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 09:10:43.051550  204666 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:10:43.055029  204666 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:10:43.055272  204666 start.go:159] libmachine.API.Create for "force-systemd-flag-310604" (driver="docker")
	I1227 09:10:43.055308  204666 client.go:173] LocalClient.Create starting
	I1227 09:10:43.055382  204666 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem
	I1227 09:10:43.055425  204666 main.go:144] libmachine: Decoding PEM data...
	I1227 09:10:43.055445  204666 main.go:144] libmachine: Parsing certificate...
	I1227 09:10:43.055497  204666 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem
	I1227 09:10:43.055523  204666 main.go:144] libmachine: Decoding PEM data...
	I1227 09:10:43.055539  204666 main.go:144] libmachine: Parsing certificate...
	I1227 09:10:43.055903  204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:10:43.071470  204666 cli_runner.go:211] docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:10:43.071558  204666 network_create.go:284] running [docker network inspect force-systemd-flag-310604] to gather additional debugging logs...
	I1227 09:10:43.071581  204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604
	W1227 09:10:43.087467  204666 cli_runner.go:211] docker network inspect force-systemd-flag-310604 returned with exit code 1
	I1227 09:10:43.087522  204666 network_create.go:287] error running [docker network inspect force-systemd-flag-310604]: docker network inspect force-systemd-flag-310604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-310604 not found
	I1227 09:10:43.087536  204666 network_create.go:289] output of [docker network inspect force-systemd-flag-310604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-310604 not found
	
	** /stderr **
	I1227 09:10:43.087649  204666 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:10:43.105322  204666 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3499bc401779 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:76:98:a8:d7:e7} reservation:<nil>}
	I1227 09:10:43.105737  204666 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c1260ea8a496 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:1e:3f:a3:f0:1f} reservation:<nil>}
	I1227 09:10:43.106114  204666 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5173b3fb685 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:6a:35:6e:4e:02} reservation:<nil>}
	I1227 09:10:43.106601  204666 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a15060}
	I1227 09:10:43.106630  204666 network_create.go:124] attempt to create docker network force-systemd-flag-310604 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:10:43.106687  204666 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-310604 force-systemd-flag-310604
	I1227 09:10:43.181323  204666 network_create.go:108] docker network force-systemd-flag-310604 192.168.76.0/24 created
	I1227 09:10:43.181368  204666 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-310604" container
	I1227 09:10:43.181450  204666 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:10:43.199791  204666 cli_runner.go:164] Run: docker volume create force-systemd-flag-310604 --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:10:43.217217  204666 oci.go:103] Successfully created a docker volume force-systemd-flag-310604
	I1227 09:10:43.217303  204666 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-310604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --entrypoint /usr/bin/test -v force-systemd-flag-310604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:10:43.768592  204666 oci.go:107] Successfully prepared a docker volume force-systemd-flag-310604
	I1227 09:10:43.768647  204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:10:43.768656  204666 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:10:43.768730  204666 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-310604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:10:47.941425  204666 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-310604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.172659446s)
	I1227 09:10:47.941459  204666 kic.go:203] duration metric: took 4.172798697s to extract preloaded images to volume ...
	W1227 09:10:47.941608  204666 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:10:47.941723  204666 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:10:48.016863  204666 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-310604 --name force-systemd-flag-310604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-310604 --network force-systemd-flag-310604 --ip 192.168.76.2 --volume force-systemd-flag-310604:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:10:48.339827  204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Running}}
	I1227 09:10:48.361703  204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
	I1227 09:10:48.386273  204666 cli_runner.go:164] Run: docker exec force-systemd-flag-310604 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:10:48.435149  204666 oci.go:144] the created container "force-systemd-flag-310604" has a running status.
	I1227 09:10:48.435183  204666 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa...
	I1227 09:10:48.595417  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:10:48.595508  204666 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:10:48.621694  204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
	I1227 09:10:48.646093  204666 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:10:48.646113  204666 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-310604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:10:48.702415  204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
	I1227 09:10:48.724275  204666 machine.go:94] provisionDockerMachine start ...
	I1227 09:10:48.724381  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:48.753127  204666 main.go:144] libmachine: Using SSH client type: native
	I1227 09:10:48.753463  204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1227 09:10:48.753473  204666 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:10:48.754067  204666 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48476->127.0.0.1:33043: read: connection reset by peer
	I1227 09:10:51.891685  204666 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-310604
	
	I1227 09:10:51.891708  204666 ubuntu.go:182] provisioning hostname "force-systemd-flag-310604"
	I1227 09:10:51.891772  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:51.909491  204666 main.go:144] libmachine: Using SSH client type: native
	I1227 09:10:51.909807  204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1227 09:10:51.909825  204666 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-310604 && echo "force-systemd-flag-310604" | sudo tee /etc/hostname
	I1227 09:10:52.057961  204666 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-310604
	
	I1227 09:10:52.058064  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:52.075700  204666 main.go:144] libmachine: Using SSH client type: native
	I1227 09:10:52.076053  204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1227 09:10:52.076078  204666 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-310604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-310604/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-310604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:10:52.217368  204666 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:10:52.217456  204666 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-2451/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-2451/.minikube}
	I1227 09:10:52.217491  204666 ubuntu.go:190] setting up certificates
	I1227 09:10:52.217534  204666 provision.go:84] configureAuth start
	I1227 09:10:52.217619  204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
	I1227 09:10:52.237744  204666 provision.go:143] copyHostCerts
	I1227 09:10:52.237795  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
	I1227 09:10:52.237833  204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem, removing ...
	I1227 09:10:52.237841  204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
	I1227 09:10:52.238083  204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem (1078 bytes)
	I1227 09:10:52.238190  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
	I1227 09:10:52.238504  204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem, removing ...
	I1227 09:10:52.238511  204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
	I1227 09:10:52.238894  204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem (1123 bytes)
	I1227 09:10:52.239000  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
	I1227 09:10:52.239017  204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem, removing ...
	I1227 09:10:52.239022  204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
	I1227 09:10:52.239052  204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem (1679 bytes)
	I1227 09:10:52.239110  204666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-310604 san=[127.0.0.1 192.168.76.2 force-systemd-flag-310604 localhost minikube]
	I1227 09:10:52.569945  204666 provision.go:177] copyRemoteCerts
	I1227 09:10:52.570044  204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:10:52.570093  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:52.587912  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:52.687698  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:10:52.687844  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:10:52.705320  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:10:52.705381  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:10:52.723327  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:10:52.723385  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:10:52.740566  204666 provision.go:87] duration metric: took 522.993586ms to configureAuth
	I1227 09:10:52.740592  204666 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:10:52.740766  204666 config.go:182] Loaded profile config "force-systemd-flag-310604": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:10:52.740780  204666 machine.go:97] duration metric: took 4.016481436s to provisionDockerMachine
	I1227 09:10:52.740787  204666 client.go:176] duration metric: took 9.685467552s to LocalClient.Create
	I1227 09:10:52.740816  204666 start.go:167] duration metric: took 9.685545363s to libmachine.API.Create "force-systemd-flag-310604"
	I1227 09:10:52.740827  204666 start.go:293] postStartSetup for "force-systemd-flag-310604" (driver="docker")
	I1227 09:10:52.740837  204666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:10:52.740910  204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:10:52.740954  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:52.757935  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:52.856170  204666 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:10:52.859510  204666 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:10:52.859542  204666 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:10:52.859553  204666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/addons for local assets ...
	I1227 09:10:52.859606  204666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/files for local assets ...
	I1227 09:10:52.859688  204666 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> 42882.pem in /etc/ssl/certs
	I1227 09:10:52.859699  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> /etc/ssl/certs/42882.pem
	I1227 09:10:52.859802  204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:10:52.867151  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /etc/ssl/certs/42882.pem (1708 bytes)
	I1227 09:10:52.884851  204666 start.go:296] duration metric: took 144.00855ms for postStartSetup
	I1227 09:10:52.885206  204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
	I1227 09:10:52.901828  204666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json ...
	I1227 09:10:52.902117  204666 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:10:52.902171  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:52.918960  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:53.021390  204666 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:10:53.026246  204666 start.go:128] duration metric: took 9.974681148s to createHost
	I1227 09:10:53.026316  204666 start.go:83] releasing machines lock for "force-systemd-flag-310604", held for 9.974853178s
	I1227 09:10:53.026407  204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
	I1227 09:10:53.043542  204666 ssh_runner.go:195] Run: cat /version.json
	I1227 09:10:53.043598  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:53.043860  204666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:10:53.043921  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:53.061875  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:53.068175  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:53.255401  204666 ssh_runner.go:195] Run: systemctl --version
	I1227 09:10:53.262139  204666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:10:53.266534  204666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:10:53.266627  204666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:10:53.295238  204666 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:10:53.295259  204666 start.go:496] detecting cgroup driver to use...
	I1227 09:10:53.295273  204666 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:10:53.295340  204666 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 09:10:53.310658  204666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:10:53.324980  204666 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:10:53.325045  204666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:10:53.342693  204666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:10:53.361786  204666 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:10:53.481591  204666 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:10:53.609612  204666 docker.go:234] disabling docker service ...
	I1227 09:10:53.609677  204666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:10:53.632809  204666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:10:53.646556  204666 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:10:53.776893  204666 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:10:53.893803  204666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:10:53.906923  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:10:53.921921  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:10:53.930787  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:10:53.940192  204666 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:10:53.940311  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:10:53.949596  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:10:53.959130  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:10:53.967866  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:10:53.977401  204666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:10:53.985565  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:10:53.994878  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:10:54.004397  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:10:54.016162  204666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:10:54.025513  204666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:10:54.034319  204666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:10:54.150756  204666 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 09:10:54.285989  204666 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 09:10:54.286115  204666 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 09:10:54.290075  204666 start.go:574] Will wait 60s for crictl version
	I1227 09:10:54.290185  204666 ssh_runner.go:195] Run: which crictl
	I1227 09:10:54.293949  204666 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:10:54.321666  204666 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 09:10:54.321783  204666 ssh_runner.go:195] Run: containerd --version
	I1227 09:10:54.345867  204666 ssh_runner.go:195] Run: containerd --version
	I1227 09:10:54.376785  204666 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 09:10:54.379751  204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:10:54.401792  204666 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:10:54.406481  204666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:10:54.416271  204666 kubeadm.go:884] updating cluster {Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:10:54.416393  204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:10:54.416457  204666 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:10:54.444036  204666 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 09:10:54.444061  204666 containerd.go:542] Images already preloaded, skipping extraction
	I1227 09:10:54.444118  204666 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:10:54.485541  204666 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 09:10:54.485561  204666 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:10:54.485569  204666 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1227 09:10:54.485974  204666 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-310604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:10:54.486092  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 09:10:54.526429  204666 cni.go:84] Creating CNI manager for ""
	I1227 09:10:54.526503  204666 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:10:54.526540  204666 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:10:54.526596  204666 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-310604 NodeName:force-systemd-flag-310604 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:10:54.526756  204666 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-310604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:10:54.526867  204666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:10:54.534776  204666 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:10:54.534862  204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:10:54.542666  204666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1227 09:10:54.555276  204666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:10:54.568252  204666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1227 09:10:54.581175  204666 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:10:54.584678  204666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:10:54.594342  204666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:10:54.722742  204666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:10:54.739944  204666 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604 for IP: 192.168.76.2
	I1227 09:10:54.739989  204666 certs.go:195] generating shared ca certs ...
	I1227 09:10:54.740005  204666 certs.go:227] acquiring lock for ca certs: {Name:mk774ac921aa16ecd5f2d791fd87948cd01f1dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:54.740163  204666 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key
	I1227 09:10:54.740222  204666 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key
	I1227 09:10:54.740235  204666 certs.go:257] generating profile certs ...
	I1227 09:10:54.740300  204666 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key
	I1227 09:10:54.740327  204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt with IP's: []
	I1227 09:10:54.883927  204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt ...
	I1227 09:10:54.883962  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt: {Name:mkaf7a59941c35faf8629e9c6734e607330f0676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:54.884180  204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key ...
	I1227 09:10:54.884200  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key: {Name:mk15fe73d8be76bfb61d2cf22a9a54c4980a1213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:54.884320  204666 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c
	I1227 09:10:54.884341  204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:10:55.261500  204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c ...
	I1227 09:10:55.261538  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c: {Name:mkd8a84348a7ab947593ad31a2bf6eac08baadd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:55.261722  204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c ...
	I1227 09:10:55.261739  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c: {Name:mk0b1844eb49c1d885fbeaa194740cfbf0f66c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:55.261815  204666 certs.go:382] copying /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c -> /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt
	I1227 09:10:55.261907  204666 certs.go:386] copying /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c -> /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key
	I1227 09:10:55.261975  204666 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key
	I1227 09:10:55.261997  204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt with IP's: []
	I1227 09:10:55.489265  204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt ...
	I1227 09:10:55.489301  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt: {Name:mkd9f18caf462c3a8d2a28c4ddec386f0dbd816a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:55.489549  204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key ...
	I1227 09:10:55.489567  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key: {Name:mk9ff37441688b65bb6af030e9075e756fa5b4e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:55.489687  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:10:55.489718  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:10:55.489742  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:10:55.489765  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:10:55.489782  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:10:55.489806  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:10:55.489826  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:10:55.489837  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:10:55.489910  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem (1338 bytes)
	W1227 09:10:55.489959  204666 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288_empty.pem, impossibly tiny 0 bytes
	I1227 09:10:55.489975  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:10:55.490010  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:10:55.490045  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:10:55.490073  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem (1679 bytes)
	I1227 09:10:55.490121  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem (1708 bytes)
	I1227 09:10:55.490158  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.490176  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem -> /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.490197  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.490797  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:10:55.520180  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1227 09:10:55.539728  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:10:55.558726  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:10:55.577125  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:10:55.595030  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:10:55.612583  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:10:55.629890  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:10:55.647395  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:10:55.664281  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem --> /usr/share/ca-certificates/4288.pem (1338 bytes)
	I1227 09:10:55.682209  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /usr/share/ca-certificates/42882.pem (1708 bytes)
	I1227 09:10:55.699375  204666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:10:55.713225  204666 ssh_runner.go:195] Run: openssl version
	I1227 09:10:55.719549  204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.726782  204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42882.pem /etc/ssl/certs/42882.pem
	I1227 09:10:55.734088  204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.737803  204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:34 /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.737867  204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.779013  204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:10:55.786846  204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42882.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:10:55.794882  204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.802676  204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:10:55.810367  204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.814525  204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.814592  204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.856125  204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:10:55.863440  204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:10:55.870807  204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.877797  204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4288.pem /etc/ssl/certs/4288.pem
	I1227 09:10:55.885325  204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.889003  204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:34 /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.889078  204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.930128  204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:10:55.937477  204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4288.pem /etc/ssl/certs/51391683.0
	I1227 09:10:55.944699  204666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:10:55.948214  204666 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:10:55.948267  204666 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:10:55.948345  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 09:10:55.948412  204666 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:10:55.985105  204666 cri.go:96] found id: ""
	I1227 09:10:55.985202  204666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:10:55.994392  204666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:10:56.002476  204666 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:10:56.002588  204666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:10:56.013561  204666 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:10:56.013641  204666 kubeadm.go:158] found existing configuration files:
	
	I1227 09:10:56.013734  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:10:56.026163  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:10:56.026252  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:10:56.034452  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:10:56.042951  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:10:56.043043  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:10:56.051250  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:10:56.059162  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:10:56.059229  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:10:56.066603  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:10:56.074518  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:10:56.074592  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:10:56.081945  204666 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:10:56.121942  204666 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:10:56.122047  204666 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:10:56.212923  204666 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:10:56.213040  204666 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:10:56.213099  204666 kubeadm.go:319] OS: Linux
	I1227 09:10:56.213162  204666 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:10:56.213227  204666 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:10:56.213298  204666 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:10:56.213364  204666 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:10:56.213434  204666 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:10:56.213512  204666 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:10:56.213583  204666 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:10:56.213655  204666 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:10:56.213718  204666 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:10:56.276575  204666 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:10:56.276758  204666 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:10:56.276888  204666 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:10:56.284403  204666 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:10:56.290757  204666 out.go:252]   - Generating certificates and keys ...
	I1227 09:10:56.290854  204666 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:10:56.290926  204666 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:10:56.622516  204666 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:10:57.129861  204666 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:10:57.426106  204666 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:10:57.593509  204666 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:10:57.874524  204666 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:10:57.874936  204666 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:10:58.122828  204666 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:10:58.123152  204666 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:10:58.265970  204666 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:10:58.561360  204666 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:10:58.701478  204666 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:10:58.701573  204666 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:10:58.886739  204666 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:10:59.201465  204666 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:11:00.021317  204666 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:11:00.354783  204666 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:11:00.706525  204666 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:11:00.707614  204666 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:11:00.710676  204666 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:11:00.714234  204666 out.go:252]   - Booting up control plane ...
	I1227 09:11:00.714348  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:11:00.714433  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:11:00.720333  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:11:00.746371  204666 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:11:00.746513  204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:11:00.754160  204666 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:11:00.754510  204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:11:00.754557  204666 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:11:00.882317  204666 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:11:00.882439  204666 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:15:00.883060  204666 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001055113s
	I1227 09:15:00.883095  204666 kubeadm.go:319] 
	I1227 09:15:00.883153  204666 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:15:00.883192  204666 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:15:00.883301  204666 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:15:00.883311  204666 kubeadm.go:319] 
	I1227 09:15:00.883416  204666 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:15:00.883451  204666 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:15:00.883488  204666 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:15:00.883494  204666 kubeadm.go:319] 
	I1227 09:15:00.894305  204666 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:15:00.894751  204666 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:15:00.894868  204666 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:15:00.895118  204666 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 09:15:00.895131  204666 kubeadm.go:319] 
	I1227 09:15:00.895203  204666 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 09:15:00.895331  204666 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001055113s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001055113s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 09:15:00.895433  204666 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1227 09:15:01.313761  204666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:15:01.333363  204666 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:15:01.333466  204666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:15:01.342629  204666 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:15:01.342662  204666 kubeadm.go:158] found existing configuration files:
	
	I1227 09:15:01.342749  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:15:01.353052  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:15:01.353146  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:15:01.361396  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:15:01.369967  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:15:01.370034  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:15:01.378378  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:15:01.387663  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:15:01.387748  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:15:01.396344  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:15:01.405204  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:15:01.405270  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:15:01.413447  204666 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:15:01.463956  204666 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:15:01.464308  204666 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:15:01.552614  204666 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:15:01.552773  204666 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:15:01.552857  204666 kubeadm.go:319] OS: Linux
	I1227 09:15:01.552946  204666 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:15:01.553026  204666 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:15:01.553108  204666 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:15:01.553189  204666 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:15:01.553273  204666 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:15:01.553355  204666 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:15:01.553433  204666 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:15:01.553518  204666 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:15:01.553597  204666 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:15:01.623916  204666 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:15:01.624121  204666 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:15:01.624266  204666 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:15:01.629993  204666 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:15:01.633473  204666 out.go:252]   - Generating certificates and keys ...
	I1227 09:15:01.633564  204666 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:15:01.633648  204666 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:15:01.633732  204666 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 09:15:01.633816  204666 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 09:15:01.633931  204666 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 09:15:01.634153  204666 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 09:15:01.634509  204666 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 09:15:01.634871  204666 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 09:15:01.635227  204666 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 09:15:01.635557  204666 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 09:15:01.635839  204666 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 09:15:01.635902  204666 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:15:02.134928  204666 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:15:02.350950  204666 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:15:02.446843  204666 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:15:02.770471  204666 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:15:03.012958  204666 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:15:03.014723  204666 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:15:03.017019  204666 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:15:03.020102  204666 out.go:252]   - Booting up control plane ...
	I1227 09:15:03.020229  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:15:03.020308  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:15:03.022628  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:15:03.047367  204666 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:15:03.047562  204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:15:03.054984  204666 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:15:03.055427  204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:15:03.055683  204666 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:15:03.202781  204666 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:15:03.202915  204666 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:19:03.204484  204666 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000488648s
	I1227 09:19:03.204509  204666 kubeadm.go:319] 
	I1227 09:19:03.204566  204666 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:19:03.204600  204666 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:19:03.204705  204666 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:19:03.204710  204666 kubeadm.go:319] 
	I1227 09:19:03.204814  204666 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:19:03.204846  204666 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:19:03.204877  204666 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:19:03.204881  204666 kubeadm.go:319] 
	I1227 09:19:03.217785  204666 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:19:03.218533  204666 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:19:03.218725  204666 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:19:03.219191  204666 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 09:19:03.219198  204666 kubeadm.go:319] 
	I1227 09:19:03.219319  204666 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 09:19:03.219385  204666 kubeadm.go:403] duration metric: took 8m7.271122438s to StartCluster
	I1227 09:19:03.219439  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:19:03.219506  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:19:03.293507  204666 cri.go:96] found id: ""
	I1227 09:19:03.293587  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.293612  204666 logs.go:284] No container was found matching "kube-apiserver"
	I1227 09:19:03.293653  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 09:19:03.293737  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:19:03.328940  204666 cri.go:96] found id: ""
	I1227 09:19:03.328973  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.328982  204666 logs.go:284] No container was found matching "etcd"
	I1227 09:19:03.328990  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 09:19:03.329064  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:19:03.374166  204666 cri.go:96] found id: ""
	I1227 09:19:03.374236  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.374260  204666 logs.go:284] No container was found matching "coredns"
	I1227 09:19:03.374286  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:19:03.374375  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:19:03.422359  204666 cri.go:96] found id: ""
	I1227 09:19:03.422395  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.422405  204666 logs.go:284] No container was found matching "kube-scheduler"
	I1227 09:19:03.422411  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:19:03.422486  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:19:03.481975  204666 cri.go:96] found id: ""
	I1227 09:19:03.482015  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.482024  204666 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:19:03.482030  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:19:03.482095  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:19:03.538264  204666 cri.go:96] found id: ""
	I1227 09:19:03.538290  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.538300  204666 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 09:19:03.538307  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 09:19:03.538373  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:19:03.592079  204666 cri.go:96] found id: ""
	I1227 09:19:03.592102  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.592110  204666 logs.go:284] No container was found matching "kindnet"
	I1227 09:19:03.592121  204666 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:19:03.592134  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:19:03.692446  204666 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:19:03.683947    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.684806    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.686564    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.686877    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.688421    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 09:19:03.683947    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.684806    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.686564    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.686877    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.688421    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:19:03.692475  204666 logs.go:123] Gathering logs for containerd ...
	I1227 09:19:03.692487  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 09:19:03.740848  204666 logs.go:123] Gathering logs for container status ...
	I1227 09:19:03.740925  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:19:03.782208  204666 logs.go:123] Gathering logs for kubelet ...
	I1227 09:19:03.782242  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 09:19:03.874946  204666 logs.go:123] Gathering logs for dmesg ...
	I1227 09:19:03.874978  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1227 09:19:03.889356  204666 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000488648s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 09:19:03.889407  204666 out.go:285] * 
	* 
	W1227 09:19:03.889455  204666 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000488648s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000488648s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:19:03.889475  204666 out.go:285] * 
	* 
	W1227 09:19:03.889727  204666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:19:03.894675  204666 out.go:203] 
	W1227 09:19:03.897830  204666 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000488648s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000488648s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:19:03.897891  204666 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 09:19:03.897912  204666 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 09:19:03.901086  204666 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-310604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
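Exit status 109 corresponds to the K8S_KUBELET_NOT_RUNNING reason shown in the stderr above, and minikube's own output already names the two follow-ups: inspect the kubelet journal on the node and retry the start with an explicit systemd cgroup driver for the kubelet. A minimal sketch of those two steps, reusing this run's profile name and flags (not verified against this cgroup v1 host):

	# inspect the kubelet unit inside the still-running node container
	out/minikube-linux-arm64 -p force-systemd-flag-310604 ssh -- sudo journalctl -xeu kubelet | tail -n 50
	# retry with the extra kubelet config suggested in the failure output
	out/minikube-linux-arm64 delete -p force-systemd-flag-310604
	out/minikube-linux-arm64 start -p force-systemd-flag-310604 --memory=3072 --force-systemd --driver=docker --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd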
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-310604 ssh "cat /etc/containerd/config.toml"
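The config dump requested above is presumably how the test confirms that --force-systemd actually switched containerd to the systemd cgroup driver. As a rough sketch of what a correctly provisioned node's /etc/containerd/config.toml would contain (the exact plugin path differs between containerd 1.x and 2.x; the line being checked is SystemdCgroup = true):

	# containerd 2.x (config version 3) layout; containerd 1.x nests the same option
	# under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc.options]
	  SystemdCgroup = true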
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 09:19:04.402615979 +0000 UTC m=+3062.674909962
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-310604
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-310604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7",
	        "Created": "2025-12-27T09:10:48.033403799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205114,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:10:48.111175804Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7/hosts",
	        "LogPath": "/var/lib/docker/containers/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7/47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7-json.log",
	        "Name": "/force-systemd-flag-310604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-310604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-310604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e3944629b13b566c43d396125a7b7ac492c412ed64d892c2c80ea5984054b7",
	                "LowerDir": "/var/lib/docker/overlay2/1832752730d9b80d56e5d1b2e667c033714fa736328e00cf7f25bfaac60db49d-init/diff:/var/lib/docker/overlay2/c2f1250c3b92b032a53152a31400b908e250d3d45594ebbf65fa51d032f3248a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1832752730d9b80d56e5d1b2e667c033714fa736328e00cf7f25bfaac60db49d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1832752730d9b80d56e5d1b2e667c033714fa736328e00cf7f25bfaac60db49d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1832752730d9b80d56e5d1b2e667c033714fa736328e00cf7f25bfaac60db49d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-310604",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-310604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-310604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-310604",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-310604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e96687a422df76ac3188f2c47b05adc19dd7f8be7690a9adb99feca2abb9143",
	            "SandboxKey": "/var/run/docker/netns/1e96687a422d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-310604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:54:c4:15:68:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "775f708a5f85a72bd8cf9cd7fbcfcc4ed9e02d7cba71aadca90a595a328140fc",
	                    "EndpointID": "cf7c3500c9a0740602bf99c4058772b58dc8eefa4147300a2aeaa8438e4cd2e7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-310604",
	                        "47e3944629b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
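Rather than scanning the full JSON above, individual fields can be pulled with docker inspect's Go-template flag; for example, using the same container name as this run:

	# cgroup namespace mode and container state ("host" / "running" in the dump above)
	docker inspect -f '{{.HostConfig.CgroupnsMode}} {{.State.Status}}' force-systemd-flag-310604
	# host port published for the apiserver port 8443 (33046 in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' force-systemd-flag-310604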
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-310604 -n force-systemd-flag-310604
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-310604 -n force-systemd-flag-310604: exit status 6 (329.823587ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:19:04.743339  230453 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-310604" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
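The status check fails only because the kubeconfig never received an endpoint for this profile (the start aborted before kubeadm finished), which is why the harness notes that exit status 6 may be acceptable. The fix the warning itself suggests, scoped to this profile, would be:

	out/minikube-linux-arm64 -p force-systemd-flag-310604 update-context

That only rewrites the kubeconfig entry; the apiserver behind it would still be unreachable until the kubelet failure above is resolved.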
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-310604 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p force-systemd-flag-310604 logs -n 25: (1.157040968s)
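Only the last 25 entries are captured below; the issue-reporting box in the stderr above asks for a full log bundle instead, which for this profile would be produced with:

	out/minikube-linux-arm64 -p force-systemd-flag-310604 logs --file=logs.txt

The --file flag is the one quoted in minikube's own hint; the truncated output that follows is what the harness keeps in this report.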
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ delete  │ -p force-systemd-env-145961                                                                                                                                                                                                                         │ force-systemd-env-145961  │ jenkins │ v1.37.0 │ 27 Dec 25 09:13 UTC │ 27 Dec 25 09:13 UTC │
	│ start   │ -p cert-options-229858 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-229858       │ jenkins │ v1.37.0 │ 27 Dec 25 09:13 UTC │ 27 Dec 25 09:14 UTC │
	│ ssh     │ cert-options-229858 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-229858       │ jenkins │ v1.37.0 │ 27 Dec 25 09:14 UTC │ 27 Dec 25 09:14 UTC │
	│ ssh     │ -p cert-options-229858 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-229858       │ jenkins │ v1.37.0 │ 27 Dec 25 09:14 UTC │ 27 Dec 25 09:14 UTC │
	│ delete  │ -p cert-options-229858                                                                                                                                                                                                                              │ cert-options-229858       │ jenkins │ v1.37.0 │ 27 Dec 25 09:14 UTC │ 27 Dec 25 09:14 UTC │
	│ start   │ -p old-k8s-version-046838 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:14 UTC │ 27 Dec 25 09:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-046838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:15 UTC │
	│ stop    │ -p old-k8s-version-046838 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-046838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:15 UTC │
	│ start   │ -p old-k8s-version-046838 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:15 UTC │ 27 Dec 25 09:16 UTC │
	│ image   │ old-k8s-version-046838 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
	│ pause   │ -p old-k8s-version-046838 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
	│ unpause │ -p old-k8s-version-046838 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
	│ delete  │ -p old-k8s-version-046838                                                                                                                                                                                                                           │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
	│ delete  │ -p old-k8s-version-046838                                                                                                                                                                                                                           │ old-k8s-version-046838    │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:16 UTC │
	│ start   │ -p no-preload-524171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:16 UTC │ 27 Dec 25 09:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-524171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:17 UTC │ 27 Dec 25 09:17 UTC │
	│ stop    │ -p no-preload-524171 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:17 UTC │ 27 Dec 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-524171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:17 UTC │ 27 Dec 25 09:17 UTC │
	│ start   │ -p no-preload-524171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:17 UTC │ 27 Dec 25 09:18 UTC │
	│ image   │ no-preload-524171 image list --format=json                                                                                                                                                                                                          │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │ 27 Dec 25 09:18 UTC │
	│ pause   │ -p no-preload-524171 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │ 27 Dec 25 09:19 UTC │
	│ unpause │ -p no-preload-524171 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:19 UTC │ 27 Dec 25 09:19 UTC │
	│ delete  │ -p no-preload-524171                                                                                                                                                                                                                                │ no-preload-524171         │ jenkins │ v1.37.0 │ 27 Dec 25 09:19 UTC │                     │
	│ ssh     │ force-systemd-flag-310604 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-310604 │ jenkins │ v1.37.0 │ 27 Dec 25 09:19 UTC │ 27 Dec 25 09:19 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:17:58
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:17:58.400903  226201 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:17:58.401087  226201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:17:58.401114  226201 out.go:374] Setting ErrFile to fd 2...
	I1227 09:17:58.401134  226201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:17:58.401431  226201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 09:17:58.401848  226201 out.go:368] Setting JSON to false
	I1227 09:17:58.402717  226201 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3632,"bootTime":1766823447,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:17:58.402841  226201 start.go:143] virtualization:  
	I1227 09:17:58.407933  226201 out.go:179] * [no-preload-524171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:17:58.411055  226201 notify.go:221] Checking for updates...
	I1227 09:17:58.414165  226201 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:17:58.417195  226201 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:17:58.420122  226201 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 09:17:58.423131  226201 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 09:17:58.426044  226201 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:17:58.429148  226201 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:17:58.432664  226201 config.go:182] Loaded profile config "no-preload-524171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:17:58.433273  226201 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:17:58.459759  226201 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:17:58.459879  226201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:17:58.523524  226201 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:17:58.513905546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:17:58.523637  226201 docker.go:319] overlay module found
	I1227 09:17:58.526829  226201 out.go:179] * Using the docker driver based on existing profile
	I1227 09:17:58.529610  226201 start.go:309] selected driver: docker
	I1227 09:17:58.529636  226201 start.go:928] validating driver "docker" against &{Name:no-preload-524171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:17:58.529746  226201 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:17:58.530463  226201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:17:58.585148  226201 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:17:58.575093532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:17:58.585477  226201 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:17:58.585512  226201 cni.go:84] Creating CNI manager for ""
	I1227 09:17:58.585568  226201 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:17:58.585612  226201 start.go:353] cluster config:
	{Name:no-preload-524171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:17:58.590571  226201 out.go:179] * Starting "no-preload-524171" primary control-plane node in "no-preload-524171" cluster
	I1227 09:17:58.593397  226201 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 09:17:58.596516  226201 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:17:58.599394  226201 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:17:58.599475  226201 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:17:58.599543  226201 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/config.json ...
	I1227 09:17:58.599837  226201 cache.go:107] acquiring lock: {Name:mke1e922d7eb2a2676149298b5dba45833ae8879 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.599917  226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1227 09:17:58.599932  226201 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.033µs
	I1227 09:17:58.599952  226201 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1227 09:17:58.600044  226201 cache.go:107] acquiring lock: {Name:mk6507c82ba2441dda683a90107aed49c8f037b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.600091  226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I1227 09:17:58.600102  226201 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 139.128µs
	I1227 09:17:58.600109  226201 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I1227 09:17:58.600125  226201 cache.go:107] acquiring lock: {Name:mk8855e1f661e2dc77ec51f38d05c8826759bdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.600159  226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I1227 09:17:58.600169  226201 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 44.514µs
	I1227 09:17:58.600175  226201 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I1227 09:17:58.600184  226201 cache.go:107] acquiring lock: {Name:mk573ecca5f6c5e3847e355240d192409babe6a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.600221  226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I1227 09:17:58.600230  226201 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 47.073µs
	I1227 09:17:58.600238  226201 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I1227 09:17:58.600247  226201 cache.go:107] acquiring lock: {Name:mka17529d9dfa557ae96a2eab8e7ada7a86a0715 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.600275  226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I1227 09:17:58.600284  226201 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 37.383µs
	I1227 09:17:58.600290  226201 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I1227 09:17:58.600309  226201 cache.go:107] acquiring lock: {Name:mkb3bf38af1b254286e4b9cb77de8e4fb8511831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.600340  226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1227 09:17:58.600349  226201 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 49.995µs
	I1227 09:17:58.600355  226201 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1227 09:17:58.600364  226201 cache.go:107] acquiring lock: {Name:mk2b2169d020c5fd5946a8dee42079f4cde09f1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.600393  226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I1227 09:17:58.600405  226201 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 39.869µs
	I1227 09:17:58.600411  226201 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1227 09:17:58.600425  226201 cache.go:107] acquiring lock: {Name:mked1e3f89cbb58c53698baeb61b65b0654307c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.600456  226201 cache.go:115] /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1227 09:17:58.600464  226201 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.526µs
	I1227 09:17:58.600470  226201 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1227 09:17:58.600476  226201 cache.go:87] Successfully saved all images to host disk.
	I1227 09:17:58.621079  226201 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:17:58.621100  226201 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:17:58.621115  226201 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:17:58.621144  226201 start.go:360] acquireMachinesLock for no-preload-524171: {Name:mkf5fad8426c1227ad56bd7da91d15024fcf5f71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:17:58.621208  226201 start.go:364] duration metric: took 44.604µs to acquireMachinesLock for "no-preload-524171"
	I1227 09:17:58.621231  226201 start.go:96] Skipping create...Using existing machine configuration
	I1227 09:17:58.621241  226201 fix.go:54] fixHost starting: 
	I1227 09:17:58.621516  226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
	I1227 09:17:58.639396  226201 fix.go:112] recreateIfNeeded on no-preload-524171: state=Stopped err=<nil>
	W1227 09:17:58.639440  226201 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 09:17:58.644664  226201 out.go:252] * Restarting existing docker container for "no-preload-524171" ...
	I1227 09:17:58.644795  226201 cli_runner.go:164] Run: docker start no-preload-524171
	I1227 09:17:58.922378  226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
	I1227 09:17:58.944063  226201 kic.go:430] container "no-preload-524171" state is running.
	I1227 09:17:58.944458  226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-524171
	I1227 09:17:58.966641  226201 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/config.json ...
	I1227 09:17:58.966876  226201 machine.go:94] provisionDockerMachine start ...
	I1227 09:17:58.966935  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:17:58.994098  226201 main.go:144] libmachine: Using SSH client type: native
	I1227 09:17:58.994484  226201 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1227 09:17:58.994502  226201 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:17:58.996230  226201 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:18:02.139960  226201 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-524171
	
	I1227 09:18:02.140068  226201 ubuntu.go:182] provisioning hostname "no-preload-524171"
	I1227 09:18:02.140159  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:02.158238  226201 main.go:144] libmachine: Using SSH client type: native
	I1227 09:18:02.158569  226201 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1227 09:18:02.158581  226201 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-524171 && echo "no-preload-524171" | sudo tee /etc/hostname
	I1227 09:18:02.305841  226201 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-524171
	
	I1227 09:18:02.305952  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:02.324154  226201 main.go:144] libmachine: Using SSH client type: native
	I1227 09:18:02.324473  226201 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1227 09:18:02.324494  226201 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-524171' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-524171/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-524171' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:18:02.464447  226201 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:18:02.464549  226201 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-2451/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-2451/.minikube}
	I1227 09:18:02.464593  226201 ubuntu.go:190] setting up certificates
	I1227 09:18:02.464619  226201 provision.go:84] configureAuth start
	I1227 09:18:02.464701  226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-524171
	I1227 09:18:02.486454  226201 provision.go:143] copyHostCerts
	I1227 09:18:02.486533  226201 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem, removing ...
	I1227 09:18:02.486554  226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
	I1227 09:18:02.486634  226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem (1078 bytes)
	I1227 09:18:02.486740  226201 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem, removing ...
	I1227 09:18:02.486751  226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
	I1227 09:18:02.486779  226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem (1123 bytes)
	I1227 09:18:02.486837  226201 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem, removing ...
	I1227 09:18:02.486846  226201 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
	I1227 09:18:02.486871  226201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem (1679 bytes)
	I1227 09:18:02.486930  226201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem org=jenkins.no-preload-524171 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-524171]
	I1227 09:18:02.623993  226201 provision.go:177] copyRemoteCerts
	I1227 09:18:02.624053  226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:18:02.624097  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:02.641312  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:02.740716  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:18:02.759149  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 09:18:02.778722  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:18:02.796675  226201 provision.go:87] duration metric: took 332.020144ms to configureAuth
	I1227 09:18:02.796700  226201 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:18:02.796895  226201 config.go:182] Loaded profile config "no-preload-524171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:18:02.796902  226201 machine.go:97] duration metric: took 3.830019014s to provisionDockerMachine
	I1227 09:18:02.796910  226201 start.go:293] postStartSetup for "no-preload-524171" (driver="docker")
	I1227 09:18:02.796919  226201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:18:02.796962  226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:18:02.797001  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:02.815376  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:02.915954  226201 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:18:02.919376  226201 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:18:02.919403  226201 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:18:02.919415  226201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/addons for local assets ...
	I1227 09:18:02.919470  226201 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/files for local assets ...
	I1227 09:18:02.919550  226201 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> 42882.pem in /etc/ssl/certs
	I1227 09:18:02.919661  226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:18:02.927306  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /etc/ssl/certs/42882.pem (1708 bytes)
	I1227 09:18:02.944850  226201 start.go:296] duration metric: took 147.9256ms for postStartSetup
	I1227 09:18:02.944984  226201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:18:02.945030  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:02.962355  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:03.061913  226201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:18:03.066978  226201 fix.go:56] duration metric: took 4.445730661s for fixHost
	I1227 09:18:03.067020  226201 start.go:83] releasing machines lock for "no-preload-524171", held for 4.445785628s
	I1227 09:18:03.067101  226201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-524171
	I1227 09:18:03.085169  226201 ssh_runner.go:195] Run: cat /version.json
	I1227 09:18:03.085206  226201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:18:03.085229  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:03.085263  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:03.111386  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:03.112725  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:03.215862  226201 ssh_runner.go:195] Run: systemctl --version
	I1227 09:18:03.312166  226201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:18:03.316853  226201 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:18:03.316955  226201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:18:03.325177  226201 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1227 09:18:03.325245  226201 start.go:496] detecting cgroup driver to use...
	I1227 09:18:03.325303  226201 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 09:18:03.325367  226201 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 09:18:03.343327  226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:18:03.357022  226201 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:18:03.357103  226201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:18:03.373855  226201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:18:03.387118  226201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:18:03.527687  226201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:18:03.635245  226201 docker.go:234] disabling docker service ...
	I1227 09:18:03.635357  226201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:18:03.650672  226201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:18:03.664005  226201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:18:03.777913  226201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:18:03.889853  226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:18:03.902485  226201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:18:03.916709  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:18:03.925433  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:18:03.934123  226201 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1227 09:18:03.934217  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1227 09:18:03.942911  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:18:03.951558  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:18:03.960253  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:18:03.968841  226201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:18:03.976733  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:18:03.985289  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:18:03.993999  226201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:18:04.003737  226201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:18:04.012398  226201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:18:04.021558  226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:18:04.129037  226201 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 09:18:04.301734  226201 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 09:18:04.301805  226201 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 09:18:04.305816  226201 start.go:574] Will wait 60s for crictl version
	I1227 09:18:04.305937  226201 ssh_runner.go:195] Run: which crictl
	I1227 09:18:04.309675  226201 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:18:04.333946  226201 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 09:18:04.334025  226201 ssh_runner.go:195] Run: containerd --version
	I1227 09:18:04.353784  226201 ssh_runner.go:195] Run: containerd --version
	I1227 09:18:04.376780  226201 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 09:18:04.379820  226201 cli_runner.go:164] Run: docker network inspect no-preload-524171 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:18:04.395618  226201 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:18:04.399460  226201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:18:04.409376  226201 kubeadm.go:884] updating cluster {Name:no-preload-524171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:18:04.409501  226201 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:18:04.409563  226201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:18:04.438846  226201 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 09:18:04.438869  226201 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:18:04.438877  226201 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1227 09:18:04.438966  226201 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-524171 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:18:04.439031  226201 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 09:18:04.465516  226201 cni.go:84] Creating CNI manager for ""
	I1227 09:18:04.465585  226201 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:18:04.465635  226201 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:18:04.465699  226201 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-524171 NodeName:no-preload-524171 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:18:04.465865  226201 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-524171"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:18:04.465979  226201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:18:04.473830  226201 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:18:04.473907  226201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:18:04.481422  226201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1227 09:18:04.493700  226201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:18:04.506726  226201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2250 bytes)
	I1227 09:18:04.519456  226201 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:18:04.523030  226201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:18:04.532996  226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:18:04.638155  226201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:18:04.655671  226201 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171 for IP: 192.168.85.2
	I1227 09:18:04.655692  226201 certs.go:195] generating shared ca certs ...
	I1227 09:18:04.655708  226201 certs.go:227] acquiring lock for ca certs: {Name:mk774ac921aa16ecd5f2d791fd87948cd01f1dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:18:04.655867  226201 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key
	I1227 09:18:04.655908  226201 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key
	I1227 09:18:04.655915  226201 certs.go:257] generating profile certs ...
	I1227 09:18:04.656032  226201 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/client.key
	I1227 09:18:04.656084  226201 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/apiserver.key.580fb977
	I1227 09:18:04.656125  226201 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/proxy-client.key
	I1227 09:18:04.656234  226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem (1338 bytes)
	W1227 09:18:04.656264  226201 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288_empty.pem, impossibly tiny 0 bytes
	I1227 09:18:04.656271  226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:18:04.656303  226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:18:04.656325  226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:18:04.656352  226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem (1679 bytes)
	I1227 09:18:04.656393  226201 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem (1708 bytes)
	I1227 09:18:04.657021  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:18:04.677523  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1227 09:18:04.694756  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:18:04.712555  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:18:04.729518  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1227 09:18:04.749526  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:18:04.767207  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:18:04.792023  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:18:04.812544  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:18:04.834434  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem --> /usr/share/ca-certificates/4288.pem (1338 bytes)
	I1227 09:18:04.861056  226201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /usr/share/ca-certificates/42882.pem (1708 bytes)
	I1227 09:18:04.880555  226201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:18:04.909584  226201 ssh_runner.go:195] Run: openssl version
	I1227 09:18:04.916024  226201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:18:04.923337  226201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:18:04.934637  226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:18:04.940662  226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:18:04.940766  226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:18:04.985337  226201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:18:04.992974  226201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4288.pem
	I1227 09:18:05.002332  226201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4288.pem /etc/ssl/certs/4288.pem
	I1227 09:18:05.012484  226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4288.pem
	I1227 09:18:05.018305  226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:34 /usr/share/ca-certificates/4288.pem
	I1227 09:18:05.018428  226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4288.pem
	I1227 09:18:05.061382  226201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:18:05.068981  226201 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42882.pem
	I1227 09:18:05.076512  226201 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42882.pem /etc/ssl/certs/42882.pem
	I1227 09:18:05.084139  226201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42882.pem
	I1227 09:18:05.088106  226201 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:34 /usr/share/ca-certificates/42882.pem
	I1227 09:18:05.088170  226201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42882.pem
	I1227 09:18:05.135384  226201 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:18:05.143220  226201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:18:05.147353  226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 09:18:05.189042  226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 09:18:05.232533  226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 09:18:05.275982  226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 09:18:05.318404  226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 09:18:05.366509  226201 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1227 09:18:05.435816  226201 kubeadm.go:401] StartCluster: {Name:no-preload-524171 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-524171 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:18:05.435909  226201 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 09:18:05.436000  226201 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:18:05.485146  226201 cri.go:96] found id: "2ae8d3c66d1207c0eebd2f380dead182121d0e824ac82c8c0009b723c8c4282c"
	I1227 09:18:05.485165  226201 cri.go:96] found id: "5aa63df08b73e414d87e2974739bc0f7be6a4215d8262e879f3bc63a59ccce8a"
	I1227 09:18:05.485169  226201 cri.go:96] found id: "f124ffe1987aac2609cef749803af5ddf75469757a908e6075b30c6d3170943b"
	I1227 09:18:05.485173  226201 cri.go:96] found id: "38065bf9d52701ed4b1494dcd439b8948889c7df7e86e543b37681459e2dbf0c"
	I1227 09:18:05.485179  226201 cri.go:96] found id: "3a11c7ef8307a5efe705e860ee3d142f3aad4834ee624e52c2fe7b6d4da29f36"
	I1227 09:18:05.485182  226201 cri.go:96] found id: "4738b282a7ad923daf8903fb7015e488c518676001de17bf9e718e7cafe628da"
	I1227 09:18:05.485186  226201 cri.go:96] found id: "fc42941264da4d0e2ee7d00a5a1374b1b12d5b77f41d0e50586fc4c6481e6df6"
	I1227 09:18:05.485189  226201 cri.go:96] found id: "751aa8ea3c05e00083a87550345bfebe9f06f30ec6aa59634b1b0f573ef9653f"
	I1227 09:18:05.485192  226201 cri.go:96] found id: ""
	I1227 09:18:05.485249  226201 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1227 09:18:05.509416  226201 cri.go:123] JSON = [{"ociVersion":"1.2.1","id":"0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779","pid":864,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779/rootfs","created":"2025-12-27T09:18:05.432228982Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-524171_62054dbbefd9c8741e5c32bf10947cc5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-524171","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"62054dbbefd9c8741e5c32bf10947cc5"},"owner":"root"},{"ociVersion":"1.2.1","id":"16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659","pid":903,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-524171_e0876fb181906b5451fe5348bc79cc69","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-524171","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e0876fb181906b5451fe5348bc79cc69"},"owner":"root"}]
	I1227 09:18:05.509511  226201 cri.go:133] list returned 2 containers
	I1227 09:18:05.509527  226201 cri.go:136] container: {ID:0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779 Status:running}
	I1227 09:18:05.509549  226201 cri.go:138] skipping 0c1c6df4c1bf3cc8a81e4004f9ef7217c54ba9778fb66bdc5c76d81150a25779 - not in ps
	I1227 09:18:05.509554  226201 cri.go:136] container: {ID:16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659 Status:created}
	I1227 09:18:05.509559  226201 cri.go:138] skipping 16411915039ff46130f92e0dd369bc617ebf156d18c8e18cc5151586e765f659 - not in ps
	I1227 09:18:05.509609  226201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:18:05.527100  226201 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 09:18:05.527126  226201 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 09:18:05.527211  226201 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 09:18:05.543807  226201 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 09:18:05.544234  226201 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-524171" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 09:18:05.544333  226201 kubeconfig.go:62] /home/jenkins/minikube-integration/22344-2451/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-524171" cluster setting kubeconfig missing "no-preload-524171" context setting]
	I1227 09:18:05.544597  226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/kubeconfig: {Name:mke3c6b6762542ff27940478b7eeb947283979c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:18:05.545813  226201 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 09:18:05.572130  226201 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 09:18:05.572165  226201 kubeadm.go:602] duration metric: took 45.033003ms to restartPrimaryControlPlane
	I1227 09:18:05.572175  226201 kubeadm.go:403] duration metric: took 136.372256ms to StartCluster
	I1227 09:18:05.572190  226201 settings.go:142] acquiring lock: {Name:mk6f44443555e6cff1da53c787c3ea2c729d418d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:18:05.572285  226201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 09:18:05.572894  226201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/kubeconfig: {Name:mke3c6b6762542ff27940478b7eeb947283979c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:18:05.573092  226201 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 09:18:05.573450  226201 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 09:18:05.573519  226201 addons.go:70] Setting storage-provisioner=true in profile "no-preload-524171"
	I1227 09:18:05.573531  226201 addons.go:239] Setting addon storage-provisioner=true in "no-preload-524171"
	W1227 09:18:05.573536  226201 addons.go:248] addon storage-provisioner should already be in state true
	I1227 09:18:05.573556  226201 host.go:66] Checking if "no-preload-524171" exists ...
	I1227 09:18:05.574033  226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
	I1227 09:18:05.574653  226201 config.go:182] Loaded profile config "no-preload-524171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:18:05.574745  226201 addons.go:70] Setting default-storageclass=true in profile "no-preload-524171"
	I1227 09:18:05.574786  226201 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-524171"
	I1227 09:18:05.575066  226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
	I1227 09:18:05.575561  226201 addons.go:70] Setting dashboard=true in profile "no-preload-524171"
	I1227 09:18:05.575580  226201 addons.go:239] Setting addon dashboard=true in "no-preload-524171"
	W1227 09:18:05.575587  226201 addons.go:248] addon dashboard should already be in state true
	I1227 09:18:05.575609  226201 host.go:66] Checking if "no-preload-524171" exists ...
	I1227 09:18:05.576247  226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
	I1227 09:18:05.577920  226201 addons.go:70] Setting metrics-server=true in profile "no-preload-524171"
	I1227 09:18:05.577944  226201 addons.go:239] Setting addon metrics-server=true in "no-preload-524171"
	W1227 09:18:05.577952  226201 addons.go:248] addon metrics-server should already be in state true
	I1227 09:18:05.578071  226201 host.go:66] Checking if "no-preload-524171" exists ...
	I1227 09:18:05.578193  226201 out.go:179] * Verifying Kubernetes components...
	I1227 09:18:05.579893  226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
	I1227 09:18:05.590828  226201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:18:05.638488  226201 addons.go:239] Setting addon default-storageclass=true in "no-preload-524171"
	W1227 09:18:05.638517  226201 addons.go:248] addon default-storageclass should already be in state true
	I1227 09:18:05.638556  226201 host.go:66] Checking if "no-preload-524171" exists ...
	I1227 09:18:05.640452  226201 cli_runner.go:164] Run: docker container inspect no-preload-524171 --format={{.State.Status}}
	I1227 09:18:05.647608  226201 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 09:18:05.650099  226201 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 09:18:05.653138  226201 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1227 09:18:05.653260  226201 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:18:05.653271  226201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 09:18:05.653331  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:05.659205  226201 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1227 09:18:05.659231  226201 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1227 09:18:05.659314  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:05.659401  226201 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 09:18:05.670265  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 09:18:05.670294  226201 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 09:18:05.670400  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:05.696289  226201 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 09:18:05.696310  226201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 09:18:05.696374  226201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-524171
	I1227 09:18:05.715258  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:05.742406  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:05.752077  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:05.752578  226201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/no-preload-524171/id_rsa Username:docker}
	I1227 09:18:05.891456  226201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:18:05.955950  226201 node_ready.go:35] waiting up to 6m0s for node "no-preload-524171" to be "Ready" ...
	I1227 09:18:06.005569  226201 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1227 09:18:06.005646  226201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1227 09:18:06.092472  226201 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1227 09:18:06.092567  226201 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1227 09:18:06.141249  226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:18:06.168253  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 09:18:06.168330  226201 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 09:18:06.182987  226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 09:18:06.197144  226201 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 09:18:06.197169  226201 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1227 09:18:06.354968  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 09:18:06.354993  226201 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 09:18:06.373252  226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1227 09:18:06.435777  226201 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1227 09:18:06.435880  226201 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1227 09:18:06.488362  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 09:18:06.488438  226201 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 09:18:06.646252  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 09:18:06.646325  226201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 09:18:06.704346  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 09:18:06.704421  226201 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 09:18:06.748111  226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 09:18:06.752904  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 09:18:06.752965  226201 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 09:18:06.825777  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 09:18:06.825853  226201 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 09:18:06.917894  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 09:18:06.917967  226201 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 09:18:06.976469  226201 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:18:06.976541  226201 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 09:18:07.014820  226201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 09:18:09.246739  226201 node_ready.go:49] node "no-preload-524171" is "Ready"
	I1227 09:18:09.246767  226201 node_ready.go:38] duration metric: took 3.290765949s for node "no-preload-524171" to be "Ready" ...
	I1227 09:18:09.246781  226201 api_server.go:52] waiting for apiserver process to appear ...
	I1227 09:18:09.246843  226201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:18:09.433975  226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.250900421s)
	I1227 09:18:11.836267  226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.462926205s)
	I1227 09:18:11.836298  226201 addons.go:495] Verifying addon metrics-server=true in "no-preload-524171"
	I1227 09:18:11.906063  226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.157854122s)
	I1227 09:18:11.906186  226201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.891291045s)
	I1227 09:18:11.906360  226201 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.659505443s)
	I1227 09:18:11.906375  226201 api_server.go:72] duration metric: took 6.333252631s to wait for apiserver process to appear ...
	I1227 09:18:11.906381  226201 api_server.go:88] waiting for apiserver healthz status ...
	I1227 09:18:11.906398  226201 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:18:11.909821  226201 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-524171 addons enable metrics-server
	
	I1227 09:18:11.912808  226201 out.go:179] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1227 09:18:11.914800  226201 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:18:11.914827  226201 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:18:11.916116  226201 addons.go:530] duration metric: took 6.342664448s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I1227 09:18:12.407142  226201 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:18:12.415532  226201 api_server.go:325] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1227 09:18:12.415560  226201 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1227 09:18:12.907142  226201 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 09:18:12.915201  226201 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 09:18:12.916442  226201 api_server.go:141] control plane version: v1.35.0
	I1227 09:18:12.916470  226201 api_server.go:131] duration metric: took 1.010081976s to wait for apiserver health ...
	I1227 09:18:12.916483  226201 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 09:18:12.919937  226201 system_pods.go:59] 9 kube-system pods found
	I1227 09:18:12.920013  226201 system_pods.go:61] "coredns-7d764666f9-cg99w" [0f8f020a-2432-4428-bbf0-b4448d6f8b7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:18:12.920050  226201 system_pods.go:61] "etcd-no-preload-524171" [917f850e-7d12-414f-81ef-5e9baebf15e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:18:12.920068  226201 system_pods.go:61] "kindnet-fgvj4" [a197f9bf-430f-4070-ae5f-f8d1962f365c] Running
	I1227 09:18:12.920077  226201 system_pods.go:61] "kube-apiserver-no-preload-524171" [8be044a4-a7af-4169-a8b8-819d43121f5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:18:12.920088  226201 system_pods.go:61] "kube-controller-manager-no-preload-524171" [c3e33e0e-da6a-4e43-9071-be14a56d2181] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:18:12.920093  226201 system_pods.go:61] "kube-proxy-qpgsj" [17acebe7-2a46-4561-ba4f-c1536076d97a] Running
	I1227 09:18:12.920112  226201 system_pods.go:61] "kube-scheduler-no-preload-524171" [a8c0bc4e-1bb8-40d1-a82d-f9bed47c3257] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:18:12.920124  226201 system_pods.go:61] "metrics-server-5d785b57d4-s7p4z" [b5440fa8-adbb-4d45-b518-89df473a91f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:18:12.920129  226201 system_pods.go:61] "storage-provisioner" [6dfb3476-03f3-448d-bacb-7bf1502de3b1] Running
	I1227 09:18:12.920140  226201 system_pods.go:74] duration metric: took 3.651571ms to wait for pod list to return data ...
	I1227 09:18:12.920148  226201 default_sa.go:34] waiting for default service account to be created ...
	I1227 09:18:12.922855  226201 default_sa.go:45] found service account: "default"
	I1227 09:18:12.922879  226201 default_sa.go:55] duration metric: took 2.725388ms for default service account to be created ...
	I1227 09:18:12.922889  226201 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 09:18:12.925646  226201 system_pods.go:86] 9 kube-system pods found
	I1227 09:18:12.925682  226201 system_pods.go:89] "coredns-7d764666f9-cg99w" [0f8f020a-2432-4428-bbf0-b4448d6f8b7e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 09:18:12.925691  226201 system_pods.go:89] "etcd-no-preload-524171" [917f850e-7d12-414f-81ef-5e9baebf15e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 09:18:12.925697  226201 system_pods.go:89] "kindnet-fgvj4" [a197f9bf-430f-4070-ae5f-f8d1962f365c] Running
	I1227 09:18:12.925705  226201 system_pods.go:89] "kube-apiserver-no-preload-524171" [8be044a4-a7af-4169-a8b8-819d43121f5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 09:18:12.925725  226201 system_pods.go:89] "kube-controller-manager-no-preload-524171" [c3e33e0e-da6a-4e43-9071-be14a56d2181] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 09:18:12.925732  226201 system_pods.go:89] "kube-proxy-qpgsj" [17acebe7-2a46-4561-ba4f-c1536076d97a] Running
	I1227 09:18:12.925751  226201 system_pods.go:89] "kube-scheduler-no-preload-524171" [a8c0bc4e-1bb8-40d1-a82d-f9bed47c3257] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 09:18:12.925759  226201 system_pods.go:89] "metrics-server-5d785b57d4-s7p4z" [b5440fa8-adbb-4d45-b518-89df473a91f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 09:18:12.925767  226201 system_pods.go:89] "storage-provisioner" [6dfb3476-03f3-448d-bacb-7bf1502de3b1] Running
	I1227 09:18:12.925775  226201 system_pods.go:126] duration metric: took 2.880213ms to wait for k8s-apps to be running ...
	I1227 09:18:12.925785  226201 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 09:18:12.925841  226201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:18:12.938773  226201 system_svc.go:56] duration metric: took 12.980326ms WaitForService to wait for kubelet
	I1227 09:18:12.938799  226201 kubeadm.go:587] duration metric: took 7.365674984s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 09:18:12.938817  226201 node_conditions.go:102] verifying NodePressure condition ...
	I1227 09:18:12.941812  226201 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 09:18:12.941845  226201 node_conditions.go:123] node cpu capacity is 2
	I1227 09:18:12.941858  226201 node_conditions.go:105] duration metric: took 3.036654ms to run NodePressure ...
	I1227 09:18:12.941872  226201 start.go:242] waiting for startup goroutines ...
	I1227 09:18:12.941879  226201 start.go:247] waiting for cluster config update ...
	I1227 09:18:12.941890  226201 start.go:256] writing updated cluster config ...
	I1227 09:18:12.942167  226201 ssh_runner.go:195] Run: rm -f paused
	I1227 09:18:12.945723  226201 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:18:12.949088  226201 pod_ready.go:83] waiting for pod "coredns-7d764666f9-cg99w" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 09:18:14.955019  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:17.455131  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:19.455561  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:21.954337  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:23.955555  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:26.454489  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:28.957891  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:31.454255  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:33.457988  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:35.955132  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:37.955236  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:40.454416  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:42.457590  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	W1227 09:18:44.954378  226201 pod_ready.go:104] pod "coredns-7d764666f9-cg99w" is not "Ready", error: <nil>
	I1227 09:18:45.455320  226201 pod_ready.go:94] pod "coredns-7d764666f9-cg99w" is "Ready"
	I1227 09:18:45.455349  226201 pod_ready.go:86] duration metric: took 32.50619442s for pod "coredns-7d764666f9-cg99w" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:45.458557  226201 pod_ready.go:83] waiting for pod "etcd-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:45.463288  226201 pod_ready.go:94] pod "etcd-no-preload-524171" is "Ready"
	I1227 09:18:45.463314  226201 pod_ready.go:86] duration metric: took 4.734032ms for pod "etcd-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:45.465760  226201 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:45.470571  226201 pod_ready.go:94] pod "kube-apiserver-no-preload-524171" is "Ready"
	I1227 09:18:45.470649  226201 pod_ready.go:86] duration metric: took 4.862272ms for pod "kube-apiserver-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:45.473634  226201 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:45.653367  226201 pod_ready.go:94] pod "kube-controller-manager-no-preload-524171" is "Ready"
	I1227 09:18:45.653398  226201 pod_ready.go:86] duration metric: took 179.682931ms for pod "kube-controller-manager-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:45.853767  226201 pod_ready.go:83] waiting for pod "kube-proxy-qpgsj" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:46.253384  226201 pod_ready.go:94] pod "kube-proxy-qpgsj" is "Ready"
	I1227 09:18:46.253454  226201 pod_ready.go:86] duration metric: took 399.662014ms for pod "kube-proxy-qpgsj" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:46.453574  226201 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:46.852842  226201 pod_ready.go:94] pod "kube-scheduler-no-preload-524171" is "Ready"
	I1227 09:18:46.852873  226201 pod_ready.go:86] duration metric: took 399.274053ms for pod "kube-scheduler-no-preload-524171" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 09:18:46.852887  226201 pod_ready.go:40] duration metric: took 33.907134524s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 09:18:46.907402  226201 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 09:18:46.910345  226201 out.go:203] 
	W1227 09:18:46.913170  226201 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 09:18:46.915911  226201 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 09:18:46.918772  226201 out.go:179] * Done! kubectl is now configured to use "no-preload-524171" cluster and "default" namespace by default
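With the restarted "no-preload-524171" cluster reporting Done above, the version-matched kubectl that the log itself suggests is the quickest way to confirm the control-plane pods are actually Ready. A minimal sketch, assuming the profile name from the log and that minikube has already downloaded its bundled kubectl:

	# use minikube's bundled kubectl (v1.35.0) to avoid the reported 1.33/1.35 version skew
	minikube -p no-preload-524171 kubectl -- get pods -A
	# narrow to the control-plane and addon pods the test waits on
	minikube -p no-preload-524171 kubectl -- get pods -n kube-system -o wide
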
	I1227 09:19:03.204484  204666 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000488648s
	I1227 09:19:03.204509  204666 kubeadm.go:319] 
	I1227 09:19:03.204566  204666 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:19:03.204600  204666 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:19:03.204705  204666 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:19:03.204710  204666 kubeadm.go:319] 
	I1227 09:19:03.204814  204666 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:19:03.204846  204666 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:19:03.204877  204666 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:19:03.204881  204666 kubeadm.go:319] 
	I1227 09:19:03.217785  204666 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:19:03.218533  204666 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:19:03.218725  204666 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:19:03.219191  204666 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 09:19:03.219198  204666 kubeadm.go:319] 
	I1227 09:19:03.219319  204666 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
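The kubeadm wait-control-plane failure above comes down to the kubelet on the force-systemd node never answering its healthz probe. A minimal diagnostic sketch, assuming the checks are run from the host against the failing profile (the profile name below is a placeholder, not taken from this log excerpt):

	# open a shell on the failing minikube node
	minikube ssh -p <failing-profile>
	# inside the node: the same checks kubeadm recommends above
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# probe the endpoint kubeadm was waiting on
	curl -sSL http://127.0.0.1:10248/healthz

These are the troubleshooting commands the kubeadm output already recommends, just scoped to the minikube node rather than the Jenkins host.
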
	I1227 09:19:03.219385  204666 kubeadm.go:403] duration metric: took 8m7.271122438s to StartCluster
	I1227 09:19:03.219439  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:19:03.219506  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:19:03.293507  204666 cri.go:96] found id: ""
	I1227 09:19:03.293587  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.293612  204666 logs.go:284] No container was found matching "kube-apiserver"
	I1227 09:19:03.293653  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 09:19:03.293737  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:19:03.328940  204666 cri.go:96] found id: ""
	I1227 09:19:03.328973  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.328982  204666 logs.go:284] No container was found matching "etcd"
	I1227 09:19:03.328990  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 09:19:03.329064  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:19:03.374166  204666 cri.go:96] found id: ""
	I1227 09:19:03.374236  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.374260  204666 logs.go:284] No container was found matching "coredns"
	I1227 09:19:03.374286  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:19:03.374375  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:19:03.422359  204666 cri.go:96] found id: ""
	I1227 09:19:03.422395  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.422405  204666 logs.go:284] No container was found matching "kube-scheduler"
	I1227 09:19:03.422411  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:19:03.422486  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:19:03.481975  204666 cri.go:96] found id: ""
	I1227 09:19:03.482015  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.482024  204666 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:19:03.482030  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:19:03.482095  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:19:03.538264  204666 cri.go:96] found id: ""
	I1227 09:19:03.538290  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.538300  204666 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 09:19:03.538307  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 09:19:03.538373  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:19:03.592079  204666 cri.go:96] found id: ""
	I1227 09:19:03.592102  204666 logs.go:282] 0 containers: []
	W1227 09:19:03.592110  204666 logs.go:284] No container was found matching "kindnet"
	I1227 09:19:03.592121  204666 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:19:03.592134  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:19:03.692446  204666 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:19:03.683947    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.684806    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.686564    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.686877    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.688421    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 09:19:03.683947    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.684806    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.686564    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.686877    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:03.688421    4828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:19:03.692475  204666 logs.go:123] Gathering logs for containerd ...
	I1227 09:19:03.692487  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 09:19:03.740848  204666 logs.go:123] Gathering logs for container status ...
	I1227 09:19:03.740925  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:19:03.782208  204666 logs.go:123] Gathering logs for kubelet ...
	I1227 09:19:03.782242  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 09:19:03.874946  204666 logs.go:123] Gathering logs for dmesg ...
	I1227 09:19:03.874978  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1227 09:19:03.889356  204666 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000488648s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 09:19:03.889407  204666 out.go:285] * 
	W1227 09:19:03.889455  204666 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000488648s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:19:03.889475  204666 out.go:285] * 
	W1227 09:19:03.889727  204666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:19:03.894675  204666 out.go:203] 
	W1227 09:19:03.897830  204666 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000488648s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:19:03.897891  204666 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 09:19:03.897912  204666 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 09:19:03.901086  204666 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222676117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222689918Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222723092Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222737681Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222746641Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222758268Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222767417Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222779355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222792352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.222828939Z" level=info msg="Connect containerd service"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.223105760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.223640495Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.244895830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.244995835Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.245339987Z" level=info msg="Start subscribing containerd event"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.245415484Z" level=info msg="Start recovering state"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283869264Z" level=info msg="Start event monitor"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283925355Z" level=info msg="Start cni network conf syncer for default"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283934692Z" level=info msg="Start streaming server"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283944325Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283953860Z" level=info msg="runtime interface starting up..."
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.283960424Z" level=info msg="starting plugins..."
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.284136205Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 09:10:54 force-systemd-flag-310604 containerd[758]: time="2025-12-27T09:10:54.284261745Z" level=info msg="containerd successfully booted in 0.082382s"
	Dec 27 09:10:54 force-systemd-flag-310604 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:19:05.796380    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:05.797337    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:05.799226    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:05.799776    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:19:05.801378    4958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015479] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.516409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034238] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.771451] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.481009] kauditd_printk_skb: 39 callbacks suppressed
	[Dec27 08:29] hrtimer: interrupt took 43410871 ns
	
	
	==> kernel <==
	 09:19:05 up  1:01,  0 user,  load average: 2.16, 1.89, 2.01
	Linux force-systemd-flag-310604 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:19:02 force-systemd-flag-310604 kubelet[4764]: E1227 09:19:02.792050    4764 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:19:02 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:19:03 force-systemd-flag-310604 kubelet[4805]: E1227 09:19:03.570849    4805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:19:03 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:19:04 force-systemd-flag-310604 kubelet[4850]: E1227 09:19:04.350937    4850 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:19:04 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:19:05 force-systemd-flag-310604 kubelet[4892]: E1227 09:19:05.421410    4892 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:19:05 force-systemd-flag-310604 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-310604 -n force-systemd-flag-310604
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-310604 -n force-systemd-flag-310604: exit status 6 (522.261034ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:19:06.499840  230836 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-310604" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-310604" apiserver is not running, skipping kubectl commands (state="Stopped")
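The stale-context warning above is expected once bring-up fails: the profile never reached kubeconfig (see the stderr), so `minikube status` cannot resolve an endpoint. Outside of a throwaway CI run the fix is the command the warning itself names, for example `out/minikube-linux-arm64 update-context -p force-systemd-flag-310604` (assuming the profile still exists); here it is moot, since the profile is deleted during cleanup below.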
helpers_test.go:176: Cleaning up "force-systemd-flag-310604" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-310604
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-310604: (2.182749384s)
--- FAIL: TestForceSystemdFlag (505.94s)
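The kubelet journal above shows the actual blocker: every restart (counter 318 through 321) fails configuration validation with "kubelet is configured to not run on a host using cgroup v1", so kubeadm's wait-control-plane phase never sees a healthy http://127.0.0.1:10248/healthz. The preflight warning names the escape hatch: set the kubelet option 'FailCgroupV1' to 'false' and skip the validation (the kubeadm command above already ignores SystemVerification). A rough manual sketch of the diagnosis and that workaround, assuming the profile were still running and assuming the option serializes as `failCgroupV1` in /var/lib/kubelet/config.yaml (the warning only gives the Go-style name); this is not what the test itself does:

	# The two commands kubeadm's output recommends, run inside the node:
	minikube ssh -p force-systemd-flag-310604 -- sudo systemctl status kubelet
	minikube ssh -p force-systemd-flag-310604 -- "sudo journalctl -xeu kubelet --no-pager | tail -n 50"

	# Workaround sketch: opt the kubelet back into cgroup v1 support and restart it
	# (the YAML field name failCgroupV1 is assumed from the 'FailCgroupV1' warning text):
	minikube ssh -p force-systemd-flag-310604 -- "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"

Minikube's own suggestion (--extra-config=kubelet.cgroup-driver=systemd) addresses a cgroup-driver mismatch rather than this cgroup v1 validation, so whether a plain retry with that flag would pass on this cgroup v1 host is uncertain.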

                                                
                                    
x
+
TestForceSystemdEnv (507.52s)
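This test exercises the same systemd cgroup-driver enforcement as TestForceSystemdFlag, but via the MINIKUBE_FORCE_SYSTEMD=true environment variable rather than a flag (the variable is visible in the captured stdout, and the stderr later logs `using "systemd" cgroup driver as enforced via flags`). It fails the same way, with exit status 109 after roughly 8m23s. A rough manual equivalent of what docker_test.go:155 runs, assuming the variable is simply set in the environment of the minikube process (the test source is not reproduced in this report):

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-145961 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=containerd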

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-145961 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-145961 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m23.445686026s)

                                                
                                                
-- stdout --
	* [force-systemd-env-145961] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-145961" primary control-plane node in "force-systemd-env-145961" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:05:16.035412  186168 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:05:16.038113  186168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:16.038146  186168 out.go:374] Setting ErrFile to fd 2...
	I1227 09:05:16.038167  186168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:05:16.038495  186168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 09:05:16.039007  186168 out.go:368] Setting JSON to false
	I1227 09:05:16.039949  186168 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2869,"bootTime":1766823447,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:05:16.040108  186168 start.go:143] virtualization:  
	I1227 09:05:16.044179  186168 out.go:179] * [force-systemd-env-145961] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:05:16.047424  186168 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:05:16.047468  186168 notify.go:221] Checking for updates...
	I1227 09:05:16.053498  186168 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:05:16.056428  186168 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 09:05:16.059365  186168 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 09:05:16.062282  186168 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:05:16.065240  186168 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1227 09:05:16.068966  186168 config.go:182] Loaded profile config "kubernetes-upgrade-535230": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:05:16.069084  186168 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:05:16.103597  186168 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:05:16.103713  186168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:05:16.201976  186168 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:05:16.185340454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:05:16.202077  186168 docker.go:319] overlay module found
	I1227 09:05:16.205114  186168 out.go:179] * Using the docker driver based on user configuration
	I1227 09:05:16.208124  186168 start.go:309] selected driver: docker
	I1227 09:05:16.208152  186168 start.go:928] validating driver "docker" against <nil>
	I1227 09:05:16.208169  186168 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:05:16.208909  186168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:05:16.288666  186168 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:05:16.269220027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:05:16.289135  186168 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:05:16.290968  186168 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:05:16.295702  186168 out.go:179] * Using Docker driver with root privileges
	I1227 09:05:16.298659  186168 cni.go:84] Creating CNI manager for ""
	I1227 09:05:16.298731  186168 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:05:16.298745  186168 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:05:16.298836  186168 start.go:353] cluster config:
	{Name:force-systemd-env-145961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-145961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:05:16.302085  186168 out.go:179] * Starting "force-systemd-env-145961" primary control-plane node in "force-systemd-env-145961" cluster
	I1227 09:05:16.305076  186168 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 09:05:16.308992  186168 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:05:16.312159  186168 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:05:16.312412  186168 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:05:16.312473  186168 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 09:05:16.312486  186168 cache.go:65] Caching tarball of preloaded images
	I1227 09:05:16.312555  186168 preload.go:251] Found /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 09:05:16.312566  186168 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 09:05:16.312696  186168 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/config.json ...
	I1227 09:05:16.312722  186168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/config.json: {Name:mk508d192c793a18ed8ba5e40f210010f8cc3e52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:16.337536  186168 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:05:16.337558  186168 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:05:16.337572  186168 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:05:16.337602  186168 start.go:360] acquireMachinesLock for force-systemd-env-145961: {Name:mk2e4d3b72ffcb2e4b7e522b49dd985ee267bae5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:05:16.337704  186168 start.go:364] duration metric: took 87.287µs to acquireMachinesLock for "force-systemd-env-145961"
	I1227 09:05:16.337727  186168 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-145961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-145961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 09:05:16.337792  186168 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:05:16.341191  186168 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:05:16.341434  186168 start.go:159] libmachine.API.Create for "force-systemd-env-145961" (driver="docker")
	I1227 09:05:16.341470  186168 client.go:173] LocalClient.Create starting
	I1227 09:05:16.341558  186168 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem
	I1227 09:05:16.341596  186168 main.go:144] libmachine: Decoding PEM data...
	I1227 09:05:16.341611  186168 main.go:144] libmachine: Parsing certificate...
	I1227 09:05:16.341668  186168 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem
	I1227 09:05:16.341689  186168 main.go:144] libmachine: Decoding PEM data...
	I1227 09:05:16.341701  186168 main.go:144] libmachine: Parsing certificate...
	I1227 09:05:16.342083  186168 cli_runner.go:164] Run: docker network inspect force-systemd-env-145961 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:05:16.377586  186168 cli_runner.go:211] docker network inspect force-systemd-env-145961 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:05:16.377688  186168 network_create.go:284] running [docker network inspect force-systemd-env-145961] to gather additional debugging logs...
	I1227 09:05:16.377705  186168 cli_runner.go:164] Run: docker network inspect force-systemd-env-145961
	W1227 09:05:16.404371  186168 cli_runner.go:211] docker network inspect force-systemd-env-145961 returned with exit code 1
	I1227 09:05:16.404401  186168 network_create.go:287] error running [docker network inspect force-systemd-env-145961]: docker network inspect force-systemd-env-145961: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-145961 not found
	I1227 09:05:16.404413  186168 network_create.go:289] output of [docker network inspect force-systemd-env-145961]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-145961 not found
	
	** /stderr **
	I1227 09:05:16.404534  186168 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:05:16.426166  186168 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3499bc401779 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:76:98:a8:d7:e7} reservation:<nil>}
	I1227 09:05:16.426667  186168 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c1260ea8a496 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:1e:3f:a3:f0:1f} reservation:<nil>}
	I1227 09:05:16.427009  186168 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5173b3fb685 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:6a:35:6e:4e:02} reservation:<nil>}
	I1227 09:05:16.427379  186168 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-36c2c1c4ebea IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:66:a8:80:cd:67:8f} reservation:<nil>}
	I1227 09:05:16.428087  186168 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a793b0}
	I1227 09:05:16.428130  186168 network_create.go:124] attempt to create docker network force-systemd-env-145961 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 09:05:16.428188  186168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-145961 force-systemd-env-145961
	I1227 09:05:16.530097  186168 network_create.go:108] docker network force-systemd-env-145961 192.168.85.0/24 created
	I1227 09:05:16.530131  186168 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-145961" container
	I1227 09:05:16.530207  186168 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:05:16.554171  186168 cli_runner.go:164] Run: docker volume create force-systemd-env-145961 --label name.minikube.sigs.k8s.io=force-systemd-env-145961 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:05:16.577919  186168 oci.go:103] Successfully created a docker volume force-systemd-env-145961
	I1227 09:05:16.578015  186168 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-145961-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-145961 --entrypoint /usr/bin/test -v force-systemd-env-145961:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:05:17.238834  186168 oci.go:107] Successfully prepared a docker volume force-systemd-env-145961
	I1227 09:05:17.238896  186168 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:05:17.238906  186168 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:05:17.238982  186168 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-145961:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:05:22.070894  186168 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-145961:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.83186451s)
	I1227 09:05:22.070925  186168 kic.go:203] duration metric: took 4.832015207s to extract preloaded images to volume ...
	W1227 09:05:22.071049  186168 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:05:22.071168  186168 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:05:22.176197  186168 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-145961 --name force-systemd-env-145961 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-145961 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-145961 --network force-systemd-env-145961 --ip 192.168.85.2 --volume force-systemd-env-145961:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:05:22.616540  186168 cli_runner.go:164] Run: docker container inspect force-systemd-env-145961 --format={{.State.Running}}
	I1227 09:05:22.636760  186168 cli_runner.go:164] Run: docker container inspect force-systemd-env-145961 --format={{.State.Status}}
	I1227 09:05:22.663482  186168 cli_runner.go:164] Run: docker exec force-systemd-env-145961 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:05:22.749305  186168 oci.go:144] the created container "force-systemd-env-145961" has a running status.
	I1227 09:05:22.749332  186168 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-env-145961/id_rsa...
	I1227 09:05:22.861954  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-env-145961/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:05:22.864085  186168 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-env-145961/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:05:22.902802  186168 cli_runner.go:164] Run: docker container inspect force-systemd-env-145961 --format={{.State.Status}}
	I1227 09:05:22.930391  186168 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:05:22.930410  186168 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-145961 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:05:22.999438  186168 cli_runner.go:164] Run: docker container inspect force-systemd-env-145961 --format={{.State.Status}}
	I1227 09:05:23.042544  186168 machine.go:94] provisionDockerMachine start ...
	I1227 09:05:23.042625  186168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-145961
	I1227 09:05:23.072798  186168 main.go:144] libmachine: Using SSH client type: native
	I1227 09:05:23.073140  186168 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1227 09:05:23.073150  186168 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:05:23.074215  186168 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:05:26.244277  186168 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-145961
	
	I1227 09:05:26.244350  186168 ubuntu.go:182] provisioning hostname "force-systemd-env-145961"
	I1227 09:05:26.244454  186168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-145961
	I1227 09:05:26.271231  186168 main.go:144] libmachine: Using SSH client type: native
	I1227 09:05:26.271531  186168 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1227 09:05:26.271544  186168 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-145961 && echo "force-systemd-env-145961" | sudo tee /etc/hostname
	I1227 09:05:26.463508  186168 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-145961
	
	I1227 09:05:26.463587  186168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-145961
	I1227 09:05:26.491151  186168 main.go:144] libmachine: Using SSH client type: native
	I1227 09:05:26.491463  186168 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1227 09:05:26.491480  186168 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-145961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-145961/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-145961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:05:26.644303  186168 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:05:26.644390  186168 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-2451/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-2451/.minikube}
	I1227 09:05:26.644448  186168 ubuntu.go:190] setting up certificates
	I1227 09:05:26.644480  186168 provision.go:84] configureAuth start
	I1227 09:05:26.644566  186168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-145961
	I1227 09:05:26.666516  186168 provision.go:143] copyHostCerts
	I1227 09:05:26.666557  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
	I1227 09:05:26.666588  186168 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem, removing ...
	I1227 09:05:26.666595  186168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
	I1227 09:05:26.666669  186168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem (1078 bytes)
	I1227 09:05:26.666739  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
	I1227 09:05:26.666760  186168 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem, removing ...
	I1227 09:05:26.666765  186168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
	I1227 09:05:26.666791  186168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem (1123 bytes)
	I1227 09:05:26.666829  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
	I1227 09:05:26.666844  186168 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem, removing ...
	I1227 09:05:26.666848  186168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
	I1227 09:05:26.666870  186168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem (1679 bytes)
	I1227 09:05:26.666912  186168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-145961 san=[127.0.0.1 192.168.85.2 force-systemd-env-145961 localhost minikube]
	I1227 09:05:26.872919  186168 provision.go:177] copyRemoteCerts
	I1227 09:05:26.873030  186168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:05:26.873118  186168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-145961
	I1227 09:05:26.890939  186168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-env-145961/id_rsa Username:docker}
	I1227 09:05:27.009149  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:05:27.009217  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:05:27.032982  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:05:27.033056  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:05:27.052296  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:05:27.052405  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:05:27.082363  186168 provision.go:87] duration metric: took 437.832305ms to configureAuth
	I1227 09:05:27.082433  186168 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:05:27.082644  186168 config.go:182] Loaded profile config "force-systemd-env-145961": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:05:27.082673  186168 machine.go:97] duration metric: took 4.040112168s to provisionDockerMachine
	I1227 09:05:27.082695  186168 client.go:176] duration metric: took 10.741217076s to LocalClient.Create
	I1227 09:05:27.082725  186168 start.go:167] duration metric: took 10.741291292s to libmachine.API.Create "force-systemd-env-145961"
	I1227 09:05:27.082760  186168 start.go:293] postStartSetup for "force-systemd-env-145961" (driver="docker")
	I1227 09:05:27.082791  186168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:05:27.082873  186168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:05:27.082938  186168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-145961
	I1227 09:05:27.104070  186168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-env-145961/id_rsa Username:docker}
	I1227 09:05:27.228529  186168 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:05:27.233183  186168 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:05:27.233212  186168 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:05:27.233249  186168 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/addons for local assets ...
	I1227 09:05:27.233330  186168 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/files for local assets ...
	I1227 09:05:27.233465  186168 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> 42882.pem in /etc/ssl/certs
	I1227 09:05:27.233496  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> /etc/ssl/certs/42882.pem
	I1227 09:05:27.233625  186168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:05:27.243130  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /etc/ssl/certs/42882.pem (1708 bytes)
	I1227 09:05:27.269130  186168 start.go:296] duration metric: took 186.334282ms for postStartSetup
	I1227 09:05:27.269545  186168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-145961
	I1227 09:05:27.291646  186168 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/config.json ...
	I1227 09:05:27.291940  186168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:05:27.292024  186168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-145961
	I1227 09:05:27.312072  186168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-env-145961/id_rsa Username:docker}
	I1227 09:05:27.417699  186168 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:05:27.422513  186168 start.go:128] duration metric: took 11.084702909s to createHost
	I1227 09:05:27.422538  186168 start.go:83] releasing machines lock for "force-systemd-env-145961", held for 11.084826168s
	I1227 09:05:27.422608  186168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-145961
	I1227 09:05:27.440195  186168 ssh_runner.go:195] Run: cat /version.json
	I1227 09:05:27.440249  186168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-145961
	I1227 09:05:27.440318  186168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:05:27.440384  186168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-145961
	I1227 09:05:27.463888  186168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-env-145961/id_rsa Username:docker}
	I1227 09:05:27.473082  186168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-env-145961/id_rsa Username:docker}
	I1227 09:05:27.584416  186168 ssh_runner.go:195] Run: systemctl --version
	I1227 09:05:27.709588  186168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:05:27.713932  186168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:05:27.714004  186168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:05:27.762371  186168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:05:27.762394  186168 start.go:496] detecting cgroup driver to use...
	I1227 09:05:27.762412  186168 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:05:27.762465  186168 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 09:05:27.784367  186168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:05:27.802122  186168 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:05:27.802200  186168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:05:27.822708  186168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:05:27.846383  186168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:05:27.995321  186168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:05:28.186642  186168 docker.go:234] disabling docker service ...
	I1227 09:05:28.186760  186168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:05:28.225825  186168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:05:28.253907  186168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:05:28.415573  186168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:05:28.581653  186168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
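The stop/disable/mask sequence above leaves containerd as the only CRI endpoint on the node. A minimal sketch of how one might confirm that state afterwards (standard systemctl queries, assumed here rather than taken from this log):

	# Check that docker and cri-docker are stopped; a masked unit reports "masked".
	systemctl is-active docker.service cri-docker.service
	systemctl is-enabled docker.service cri-docker.socket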
	I1227 09:05:28.597479  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:05:28.614029  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:05:28.626770  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:05:28.639911  186168 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:05:28.640023  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:05:28.651234  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:05:28.680493  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:05:28.705259  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:05:28.743762  186168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:05:28.758235  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:05:28.797006  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:05:28.808616  186168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:05:28.831311  186168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:05:28.870303  186168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:05:28.892395  186168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:05:29.181656  186168 ssh_runner.go:195] Run: sudo systemctl restart containerd
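The sed edits above switch containerd to the systemd cgroup driver before the restart. A minimal sketch of the same change done by hand, assuming the stock /etc/containerd/config.toml path used in this log:

	# Enable the systemd cgroup driver for the runc v2 runtime, then restart containerd.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd
	# Inspect the effective value afterwards.
	grep -n 'SystemdCgroup' /etc/containerd/config.toml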
	I1227 09:05:29.502757  186168 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 09:05:29.502834  186168 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 09:05:29.512239  186168 start.go:574] Will wait 60s for crictl version
	I1227 09:05:29.512308  186168 ssh_runner.go:195] Run: which crictl
	I1227 09:05:29.518359  186168 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:05:29.590555  186168 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 09:05:29.590621  186168 ssh_runner.go:195] Run: containerd --version
	I1227 09:05:29.623378  186168 ssh_runner.go:195] Run: containerd --version
	I1227 09:05:29.678483  186168 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 09:05:29.681555  186168 cli_runner.go:164] Run: docker network inspect force-systemd-env-145961 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:05:29.713165  186168 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:05:29.718226  186168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:05:29.739772  186168 kubeadm.go:884] updating cluster {Name:force-systemd-env-145961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-145961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:05:29.739881  186168 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:05:29.739943  186168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:05:29.786351  186168 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 09:05:29.786370  186168 containerd.go:542] Images already preloaded, skipping extraction
	I1227 09:05:29.786429  186168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:05:29.818299  186168 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 09:05:29.818326  186168 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:05:29.818334  186168 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1227 09:05:29.818422  186168 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-145961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-145961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
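The kubelet unit fragment above is written out a few lines later as /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in. A short sketch for inspecting the merged unit on the node (standard systemd commands, not taken from this log):

	# Show the unit together with its drop-ins, as systemd will actually run it.
	systemctl cat kubelet.service
	# The ExecStart override with --node-ip and --hostname-override lives in the drop-in.
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf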
	I1227 09:05:29.818492  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 09:05:29.865902  186168 cni.go:84] Creating CNI manager for ""
	I1227 09:05:29.865922  186168 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:05:29.865937  186168 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:05:29.865961  186168 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-145961 NodeName:force-systemd-env-145961 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:05:29.866076  186168 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-145961"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:05:29.866138  186168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:05:29.880288  186168 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:05:29.880397  186168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:05:29.890302  186168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1227 09:05:29.908400  186168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:05:29.922774  186168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
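At this point both sides of the cgroup contract have been written: containerd carries SystemdCgroup = true and the kubeadm config above carries cgroupDriver: systemd. A quick consistency check one could run on the node (paths as they appear in this log; /var/lib/kubelet/config.yaml only exists after kubeadm init has written it):

	# Containerd side of the systemd cgroup setting.
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# Kubelet side: first the kubeadm input rendered above, then the config kubeadm writes for the kubelet.
	grep -n 'cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new /var/lib/kubelet/config.yaml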
	I1227 09:05:29.937016  186168 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:05:29.947854  186168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:05:29.961456  186168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:05:30.174225  186168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:05:30.213560  186168 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961 for IP: 192.168.85.2
	I1227 09:05:30.213578  186168 certs.go:195] generating shared ca certs ...
	I1227 09:05:30.213597  186168 certs.go:227] acquiring lock for ca certs: {Name:mk774ac921aa16ecd5f2d791fd87948cd01f1dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:30.213736  186168 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key
	I1227 09:05:30.213777  186168 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key
	I1227 09:05:30.213785  186168 certs.go:257] generating profile certs ...
	I1227 09:05:30.213842  186168 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/client.key
	I1227 09:05:30.213859  186168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/client.crt with IP's: []
	I1227 09:05:30.706638  186168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/client.crt ...
	I1227 09:05:30.706787  186168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/client.crt: {Name:mkbebc357b815fd75d290792fde3083d7aa6d3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:30.707043  186168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/client.key ...
	I1227 09:05:30.707055  186168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/client.key: {Name:mkb5046e3eaf25e1919f4d7d6b3356922862c9ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:30.710738  186168 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.key.de9a1ae1
	I1227 09:05:30.710774  186168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.crt.de9a1ae1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 09:05:30.887532  186168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.crt.de9a1ae1 ...
	I1227 09:05:30.887613  186168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.crt.de9a1ae1: {Name:mk26f9ef31d1d5cd6ce47813fb405f7d684a3ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:30.887845  186168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.key.de9a1ae1 ...
	I1227 09:05:30.887887  186168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.key.de9a1ae1: {Name:mk646319e1f482a7c54c40147eab4f06ceba7141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:30.888039  186168 certs.go:382] copying /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.crt.de9a1ae1 -> /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.crt
	I1227 09:05:30.888172  186168 certs.go:386] copying /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.key.de9a1ae1 -> /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.key
	I1227 09:05:30.888276  186168 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.key
	I1227 09:05:30.888321  186168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.crt with IP's: []
	I1227 09:05:31.110694  186168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.crt ...
	I1227 09:05:31.110772  186168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.crt: {Name:mkb0f3cf67657964d8aefe10af507377e0e9a8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:31.110987  186168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.key ...
	I1227 09:05:31.111033  186168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.key: {Name:mkbae29c20f1161ce047f893573b0a398eff5403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:05:31.111146  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:05:31.111194  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:05:31.111227  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:05:31.111260  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:05:31.111297  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:05:31.111333  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:05:31.111368  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:05:31.111418  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:05:31.111495  186168 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem (1338 bytes)
	W1227 09:05:31.111564  186168 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288_empty.pem, impossibly tiny 0 bytes
	I1227 09:05:31.111589  186168 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:05:31.111643  186168 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:05:31.111702  186168 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:05:31.111751  186168 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem (1679 bytes)
	I1227 09:05:31.111833  186168 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem (1708 bytes)
	I1227 09:05:31.111890  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem -> /usr/share/ca-certificates/4288.pem
	I1227 09:05:31.111929  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> /usr/share/ca-certificates/42882.pem
	I1227 09:05:31.111963  186168 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:05:31.112589  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:05:31.136034  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1227 09:05:31.160890  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:05:31.183136  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:05:31.203820  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:05:31.256710  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:05:31.288039  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:05:31.307185  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-env-145961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:05:31.327574  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem --> /usr/share/ca-certificates/4288.pem (1338 bytes)
	I1227 09:05:31.347912  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /usr/share/ca-certificates/42882.pem (1708 bytes)
	I1227 09:05:31.366432  186168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:05:31.384279  186168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:05:31.397430  186168 ssh_runner.go:195] Run: openssl version
	I1227 09:05:31.404383  186168 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4288.pem
	I1227 09:05:31.413112  186168 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4288.pem /etc/ssl/certs/4288.pem
	I1227 09:05:31.420767  186168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4288.pem
	I1227 09:05:31.424916  186168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:34 /usr/share/ca-certificates/4288.pem
	I1227 09:05:31.425034  186168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4288.pem
	I1227 09:05:31.472449  186168 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:05:31.481359  186168 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4288.pem /etc/ssl/certs/51391683.0
	I1227 09:05:31.488743  186168 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42882.pem
	I1227 09:05:31.497595  186168 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42882.pem /etc/ssl/certs/42882.pem
	I1227 09:05:31.505295  186168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42882.pem
	I1227 09:05:31.509356  186168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:34 /usr/share/ca-certificates/42882.pem
	I1227 09:05:31.509472  186168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42882.pem
	I1227 09:05:31.554652  186168 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:05:31.563207  186168 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42882.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:05:31.571121  186168 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:05:31.578838  186168 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:05:31.586160  186168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:05:31.590255  186168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:05:31.590398  186168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:05:31.634356  186168 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:05:31.642070  186168 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
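The openssl x509 -hash step above computes the subject-hash filename (b5213941.0 for the minikube CA) that OpenSSL's certificate-directory lookup expects, and the ln -fs creates the matching symlink. A condensed sketch of the same idea for a single certificate:

	# Derive the subject hash and link the CA under /etc/ssl/certs so TLS clients can find it.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"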
	I1227 09:05:31.649629  186168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:05:31.654420  186168 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:05:31.654518  186168 kubeadm.go:401] StartCluster: {Name:force-systemd-env-145961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-145961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:05:31.654629  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 09:05:31.654751  186168 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:05:31.698100  186168 cri.go:96] found id: ""
	I1227 09:05:31.698213  186168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:05:31.710361  186168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:05:31.720349  186168 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:05:31.720454  186168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:05:31.732237  186168 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:05:31.732305  186168 kubeadm.go:158] found existing configuration files:
	
	I1227 09:05:31.732392  186168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:05:31.742352  186168 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:05:31.742457  186168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:05:31.755663  186168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:05:31.765639  186168 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:05:31.765753  186168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:05:31.774155  186168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:05:31.782615  186168 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:05:31.782727  186168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:05:31.790930  186168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:05:31.799295  186168 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:05:31.799413  186168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:05:31.808447  186168 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:05:31.866362  186168 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:05:31.866774  186168 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:05:31.973814  186168 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:05:31.973932  186168 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:05:31.974018  186168 kubeadm.go:319] OS: Linux
	I1227 09:05:31.974086  186168 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:05:31.974161  186168 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:05:31.974251  186168 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:05:31.974317  186168 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:05:31.974371  186168 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:05:31.974439  186168 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:05:31.974491  186168 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:05:31.974546  186168 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:05:31.974604  186168 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:05:32.064989  186168 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:05:32.065195  186168 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:05:32.065343  186168 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:05:32.079735  186168 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:05:32.085281  186168 out.go:252]   - Generating certificates and keys ...
	I1227 09:05:32.085450  186168 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:05:32.085558  186168 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:05:32.291636  186168 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:05:32.547126  186168 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:05:32.752850  186168 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:05:32.913552  186168 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:05:33.166971  186168 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:05:33.167343  186168 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-145961 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:05:33.274433  186168 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:05:33.274574  186168 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-145961 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:05:33.851557  186168 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:05:34.285600  186168 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:05:34.499115  186168 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:05:34.499415  186168 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:05:34.763962  186168 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:05:34.929167  186168 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:05:35.083058  186168 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:05:36.140278  186168 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:05:36.320361  186168 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:05:36.321606  186168 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:05:36.329073  186168 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:05:36.332813  186168 out.go:252]   - Booting up control plane ...
	I1227 09:05:36.332920  186168 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:05:36.332998  186168 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:05:36.333076  186168 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:05:36.367740  186168 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:05:36.367861  186168 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:05:36.384444  186168 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:05:36.384571  186168 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:05:36.384623  186168 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:05:36.653659  186168 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:05:36.654261  186168 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:09:36.654446  186168 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000224824s
	I1227 09:09:36.654489  186168 kubeadm.go:319] 
	I1227 09:09:36.654552  186168 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:09:36.654596  186168 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:09:36.654713  186168 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:09:36.654723  186168 kubeadm.go:319] 
	I1227 09:09:36.654837  186168 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:09:36.654876  186168 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:09:36.654916  186168 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:09:36.654925  186168 kubeadm.go:319] 
	I1227 09:09:36.658948  186168 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:09:36.659379  186168 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:09:36.659498  186168 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:09:36.659777  186168 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 09:09:36.659789  186168 kubeadm.go:319] 
	I1227 09:09:36.659858  186168 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
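The failure pattern above (the kubelet never answers on 127.0.0.1:10248, plus the cgroups v1 deprecation warning naming the FailCgroupV1 kubelet option) points at the kubelet on this cgroup v1 host rather than at kubeadm itself. A sketch of the triage the output suggests, run inside the node container (container name taken from this test; docker exec assumed as the entry point):

	# Follow the hints from the kubeadm output: unit status, recent kubelet logs, health endpoint.
	docker exec force-systemd-env-145961 systemctl status kubelet --no-pager
	docker exec force-systemd-env-145961 journalctl -xeu kubelet --no-pager | tail -n 50
	docker exec force-systemd-env-145961 curl -sS http://127.0.0.1:10248/healthz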
	W1227 09:09:36.660007  186168 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-145961 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-145961 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000224824s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-145961 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-145961 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000224824s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 09:09:36.660085  186168 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
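The kubeadm reset above begins minikube's second attempt. Since the cgroups v1 warning describes a host-level condition, a retry alone is unlikely to change the outcome; a quick way to see which cgroup hierarchy the host and Docker expose (standard commands, assumed to be run on the Jenkins host):

	# cgroup2fs means a unified cgroup v2 hierarchy; tmpfs indicates the legacy v1 layout.
	stat -fc %T /sys/fs/cgroup/
	# Docker's view of the cgroup driver and version it hands to containers.
	docker info --format '{{.CgroupDriver}} cgroup v{{.CgroupVersion}}'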
	I1227 09:09:37.069485  186168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:09:37.083104  186168 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:09:37.083168  186168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:09:37.091313  186168 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:09:37.091336  186168 kubeadm.go:158] found existing configuration files:
	
	I1227 09:09:37.091387  186168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:09:37.100014  186168 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:09:37.100084  186168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:09:37.109993  186168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:09:37.117893  186168 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:09:37.117963  186168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:09:37.125462  186168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:09:37.133810  186168 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:09:37.133889  186168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:09:37.141723  186168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:09:37.150178  186168 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:09:37.150246  186168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:09:37.158391  186168 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:09:37.199756  186168 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:09:37.200073  186168 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:09:37.272385  186168 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:09:37.272458  186168 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:09:37.272496  186168 kubeadm.go:319] OS: Linux
	I1227 09:09:37.272543  186168 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:09:37.272592  186168 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:09:37.272640  186168 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:09:37.272689  186168 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:09:37.272738  186168 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:09:37.272792  186168 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:09:37.272839  186168 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:09:37.272889  186168 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:09:37.272936  186168 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:09:37.349738  186168 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:09:37.349855  186168 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:09:37.349957  186168 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:09:37.355331  186168 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:09:37.360477  186168 out.go:252]   - Generating certificates and keys ...
	I1227 09:09:37.360573  186168 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:09:37.360646  186168 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:09:37.360728  186168 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 09:09:37.360794  186168 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 09:09:37.360868  186168 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 09:09:37.360927  186168 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 09:09:37.360996  186168 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 09:09:37.361061  186168 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 09:09:37.361139  186168 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 09:09:37.361216  186168 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 09:09:37.361257  186168 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 09:09:37.361317  186168 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:09:37.534243  186168 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:09:37.650816  186168 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:09:37.910496  186168 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:09:38.152184  186168 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:09:38.478466  186168 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:09:38.479032  186168 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:09:38.481633  186168 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:09:38.484943  186168 out.go:252]   - Booting up control plane ...
	I1227 09:09:38.485060  186168 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:09:38.485134  186168 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:09:38.485508  186168 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:09:38.510460  186168 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:09:38.510570  186168 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:09:38.523309  186168 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:09:38.523581  186168 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:09:38.523798  186168 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:09:38.676330  186168 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:09:38.676456  186168 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:13:38.676352  186168 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277633s
	I1227 09:13:38.676379  186168 kubeadm.go:319] 
	I1227 09:13:38.676436  186168 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:13:38.676470  186168 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:13:38.676575  186168 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:13:38.676580  186168 kubeadm.go:319] 
	I1227 09:13:38.676684  186168 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:13:38.676716  186168 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:13:38.676747  186168 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:13:38.676751  186168 kubeadm.go:319] 
	I1227 09:13:38.681021  186168 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:13:38.681446  186168 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:13:38.681559  186168 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:13:38.681797  186168 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 09:13:38.681807  186168 kubeadm.go:319] 
	I1227 09:13:38.681877  186168 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 09:13:38.681942  186168 kubeadm.go:403] duration metric: took 8m7.02742731s to StartCluster
	I1227 09:13:38.681979  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:13:38.682043  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:13:38.714462  186168 cri.go:96] found id: ""
	I1227 09:13:38.714496  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.714504  186168 logs.go:284] No container was found matching "kube-apiserver"
	I1227 09:13:38.714511  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 09:13:38.714572  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:13:38.753180  186168 cri.go:96] found id: ""
	I1227 09:13:38.753202  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.753211  186168 logs.go:284] No container was found matching "etcd"
	I1227 09:13:38.753217  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 09:13:38.753277  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:13:38.812117  186168 cri.go:96] found id: ""
	I1227 09:13:38.812139  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.812148  186168 logs.go:284] No container was found matching "coredns"
	I1227 09:13:38.812154  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:13:38.812209  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:13:38.837080  186168 cri.go:96] found id: ""
	I1227 09:13:38.837106  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.837115  186168 logs.go:284] No container was found matching "kube-scheduler"
	I1227 09:13:38.837121  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:13:38.837179  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:13:38.860892  186168 cri.go:96] found id: ""
	I1227 09:13:38.860914  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.860923  186168 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:13:38.860929  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:13:38.860989  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:13:38.886654  186168 cri.go:96] found id: ""
	I1227 09:13:38.886678  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.886689  186168 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 09:13:38.886696  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 09:13:38.886756  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:13:38.911554  186168 cri.go:96] found id: ""
	I1227 09:13:38.911576  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.911584  186168 logs.go:284] No container was found matching "kindnet"
	I1227 09:13:38.911594  186168 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:13:38.911605  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:13:39.229045  186168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:13:39.220260    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.220979    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.222634    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.223338    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.225058    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 09:13:39.220260    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.220979    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.222634    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.223338    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.225058    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:13:39.229067  186168 logs.go:123] Gathering logs for containerd ...
	I1227 09:13:39.229081  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 09:13:39.268424  186168 logs.go:123] Gathering logs for container status ...
	I1227 09:13:39.268460  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:13:39.295793  186168 logs.go:123] Gathering logs for kubelet ...
	I1227 09:13:39.295819  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 09:13:39.354186  186168 logs.go:123] Gathering logs for dmesg ...
	I1227 09:13:39.354220  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1227 09:13:39.369374  186168 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 09:13:39.369422  186168 out.go:285] * 
	* 
	W1227 09:13:39.369472  186168 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:13:39.369488  186168 out.go:285] * 
	* 
	W1227 09:13:39.369742  186168 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:13:39.374752  186168 out.go:203] 
	W1227 09:13:39.378467  186168 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:13:39.378506  186168 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 09:13:39.378531  186168 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 09:13:39.381602  186168 out.go:203] 

                                                
                                                
** /stderr **
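The stderr above ends with minikube's own suggestion: inspect 'journalctl -xeu kubelet' and retry with the kubelet cgroup driver forced to systemd. A hedged sketch of that retry, reusing the profile, driver, and runtime from this run (the exact invocation below is illustrative and was not part of the captured log):

	out/minikube-linux-arm64 -p force-systemd-env-145961 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	out/minikube-linux-arm64 start -p force-systemd-env-145961 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd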
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-145961 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-145961 ssh "cat /etc/containerd/config.toml"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-27 09:13:39.867522107 +0000 UTC m=+2738.139816082
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-145961
helpers_test.go:244: (dbg) docker inspect force-systemd-env-145961:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccc5cecb12a5242daeebd17092391b425ea7cfb242c436b6f9ee1c740fab32ed",
	        "Created": "2025-12-27T09:05:22.20657954Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 187227,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:05:22.284853915Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/ccc5cecb12a5242daeebd17092391b425ea7cfb242c436b6f9ee1c740fab32ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccc5cecb12a5242daeebd17092391b425ea7cfb242c436b6f9ee1c740fab32ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccc5cecb12a5242daeebd17092391b425ea7cfb242c436b6f9ee1c740fab32ed/hosts",
	        "LogPath": "/var/lib/docker/containers/ccc5cecb12a5242daeebd17092391b425ea7cfb242c436b6f9ee1c740fab32ed/ccc5cecb12a5242daeebd17092391b425ea7cfb242c436b6f9ee1c740fab32ed-json.log",
	        "Name": "/force-systemd-env-145961",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-145961:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-145961",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccc5cecb12a5242daeebd17092391b425ea7cfb242c436b6f9ee1c740fab32ed",
	                "LowerDir": "/var/lib/docker/overlay2/0209e345cdb4b13fc5a25b8c390aa73bd31b0647cbfd8dbd69ae23141c6d69d9-init/diff:/var/lib/docker/overlay2/c2f1250c3b92b032a53152a31400b908e250d3d45594ebbf65fa51d032f3248a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0209e345cdb4b13fc5a25b8c390aa73bd31b0647cbfd8dbd69ae23141c6d69d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0209e345cdb4b13fc5a25b8c390aa73bd31b0647cbfd8dbd69ae23141c6d69d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0209e345cdb4b13fc5a25b8c390aa73bd31b0647cbfd8dbd69ae23141c6d69d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-145961",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-145961/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-145961",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-145961",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-145961",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "730821b9d9745a765efce53cae2359002ef265b769dbaa25dfdf761e265d4ab2",
	            "SandboxKey": "/var/run/docker/netns/730821b9d974",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-145961": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:2d:68:1d:a1:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86cffbe12bee89d2dd7beea8c09d76acd25a2f4f3408aab652756d512d2d93f5",
	                    "EndpointID": "0104e2816104bbb0eb446fa57f3cd0394bbc8fc252329e771ee608cebc4d8ea2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-145961",
	                        "ccc5cecb12a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
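The full docker inspect dump above can be narrowed to the fields this failure mode cares about (container state, cgroup namespace mode, node IP) using docker's --format templates. A minimal sketch against the same container, with field paths taken from the JSON above (not a command the harness ran):

	docker inspect -f '{{.State.Status}} cgroupns={{.HostConfig.CgroupnsMode}}' force-systemd-env-145961
	docker inspect -f '{{(index .NetworkSettings.Networks "force-systemd-env-145961").IPAddress}}' force-systemd-env-145961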
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-145961 -n force-systemd-env-145961
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-145961 -n force-systemd-env-145961: exit status 6 (315.937065ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:13:40.188768  209151 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-145961" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
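The status output itself recommends refreshing the stale kubectl context; shown here as a sketch against this profile, not something the harness executed:

	out/minikube-linux-arm64 -p force-systemd-env-145961 update-context

Since the stderr above shows the profile endpoint never appeared in the kubeconfig at all, this alone is unlikely to recover the cluster connection.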
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-145961 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-224878 sudo cat /var/lib/kubelet/config.yaml                                                                            │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo systemctl status docker --all --full --no-pager                                                             │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo systemctl cat docker --no-pager                                                                             │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo cat /etc/docker/daemon.json                                                                                 │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo docker system info                                                                                          │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo systemctl status cri-docker --all --full --no-pager                                                         │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo systemctl cat cri-docker --no-pager                                                                         │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                    │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo cat /usr/lib/systemd/system/cri-docker.service                                                              │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo cri-dockerd --version                                                                                       │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo systemctl status containerd --all --full --no-pager                                                         │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo systemctl cat containerd --no-pager                                                                         │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo cat /lib/systemd/system/containerd.service                                                                  │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo cat /etc/containerd/config.toml                                                                             │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo containerd config dump                                                                                      │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo systemctl status crio --all --full --no-pager                                                               │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo systemctl cat crio --no-pager                                                                               │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                     │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ ssh     │ -p cilium-224878 sudo crio config                                                                                                 │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │                     │
	│ delete  │ -p cilium-224878                                                                                                                  │ cilium-224878             │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │ 27 Dec 25 09:07 UTC │
	│ start   │ -p cert-expiration-147576 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                      │ cert-expiration-147576    │ jenkins │ v1.37.0 │ 27 Dec 25 09:07 UTC │ 27 Dec 25 09:07 UTC │
	│ start   │ -p cert-expiration-147576 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                   │ cert-expiration-147576    │ jenkins │ v1.37.0 │ 27 Dec 25 09:10 UTC │ 27 Dec 25 09:10 UTC │
	│ delete  │ -p cert-expiration-147576                                                                                                         │ cert-expiration-147576    │ jenkins │ v1.37.0 │ 27 Dec 25 09:10 UTC │ 27 Dec 25 09:10 UTC │
	│ start   │ -p force-systemd-flag-310604 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-flag-310604 │ jenkins │ v1.37.0 │ 27 Dec 25 09:10 UTC │                     │
	│ ssh     │ force-systemd-env-145961 ssh cat /etc/containerd/config.toml                                                                      │ force-systemd-env-145961  │ jenkins │ v1.37.0 │ 27 Dec 25 09:13 UTC │ 27 Dec 25 09:13 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:10:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:10:42.800135  204666 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:10:42.800310  204666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:10:42.800324  204666 out.go:374] Setting ErrFile to fd 2...
	I1227 09:10:42.800331  204666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:10:42.800714  204666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 09:10:42.801241  204666 out.go:368] Setting JSON to false
	I1227 09:10:42.802140  204666 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3196,"bootTime":1766823447,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:10:42.802232  204666 start.go:143] virtualization:  
	I1227 09:10:42.805730  204666 out.go:179] * [force-systemd-flag-310604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:10:42.808307  204666 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:10:42.808421  204666 notify.go:221] Checking for updates...
	I1227 09:10:42.814703  204666 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:10:42.817982  204666 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 09:10:42.821099  204666 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 09:10:42.824151  204666 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:10:42.827145  204666 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:10:42.830746  204666 config.go:182] Loaded profile config "force-systemd-env-145961": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:10:42.830898  204666 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:10:42.863134  204666 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:10:42.863319  204666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:10:42.918342  204666 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:10:42.908953528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:10:42.918445  204666 docker.go:319] overlay module found
	I1227 09:10:42.921686  204666 out.go:179] * Using the docker driver based on user configuration
	I1227 09:10:42.924651  204666 start.go:309] selected driver: docker
	I1227 09:10:42.924672  204666 start.go:928] validating driver "docker" against <nil>
	I1227 09:10:42.924685  204666 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:10:42.925399  204666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:10:43.013713  204666 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:10:42.997716009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:10:43.013872  204666 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:10:43.014115  204666 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:10:43.017181  204666 out.go:179] * Using Docker driver with root privileges
	I1227 09:10:43.020064  204666 cni.go:84] Creating CNI manager for ""
	I1227 09:10:43.020140  204666 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:10:43.020159  204666 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:10:43.020250  204666 start.go:353] cluster config:
	{Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

                                                
                                                
	I1227 09:10:43.023468  204666 out.go:179] * Starting "force-systemd-flag-310604" primary control-plane node in "force-systemd-flag-310604" cluster
	I1227 09:10:43.026267  204666 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 09:10:43.029182  204666 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:10:43.032164  204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:10:43.032206  204666 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 09:10:43.032217  204666 cache.go:65] Caching tarball of preloaded images
	I1227 09:10:43.032253  204666 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:10:43.032309  204666 preload.go:251] Found /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 09:10:43.032319  204666 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 09:10:43.032459  204666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json ...
	I1227 09:10:43.032480  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json: {Name:mkbc9c01b6cdf50a409317d5cc6b1625281e0c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:43.051266  204666 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:10:43.051291  204666 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:10:43.051312  204666 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:10:43.051342  204666 start.go:360] acquireMachinesLock for force-systemd-flag-310604: {Name:mk07b16eff3a374cb7598dd22df6b68eafb28bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:10:43.051447  204666 start.go:364] duration metric: took 84.235µs to acquireMachinesLock for "force-systemd-flag-310604"
	I1227 09:10:43.051477  204666 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 09:10:43.051550  204666 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:10:43.055029  204666 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:10:43.055272  204666 start.go:159] libmachine.API.Create for "force-systemd-flag-310604" (driver="docker")
	I1227 09:10:43.055308  204666 client.go:173] LocalClient.Create starting
	I1227 09:10:43.055382  204666 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem
	I1227 09:10:43.055425  204666 main.go:144] libmachine: Decoding PEM data...
	I1227 09:10:43.055445  204666 main.go:144] libmachine: Parsing certificate...
	I1227 09:10:43.055497  204666 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem
	I1227 09:10:43.055523  204666 main.go:144] libmachine: Decoding PEM data...
	I1227 09:10:43.055539  204666 main.go:144] libmachine: Parsing certificate...
	I1227 09:10:43.055903  204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:10:43.071470  204666 cli_runner.go:211] docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:10:43.071558  204666 network_create.go:284] running [docker network inspect force-systemd-flag-310604] to gather additional debugging logs...
	I1227 09:10:43.071581  204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604
	W1227 09:10:43.087467  204666 cli_runner.go:211] docker network inspect force-systemd-flag-310604 returned with exit code 1
	I1227 09:10:43.087522  204666 network_create.go:287] error running [docker network inspect force-systemd-flag-310604]: docker network inspect force-systemd-flag-310604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-310604 not found
	I1227 09:10:43.087536  204666 network_create.go:289] output of [docker network inspect force-systemd-flag-310604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-310604 not found
	
	** /stderr **
	I1227 09:10:43.087649  204666 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:10:43.105322  204666 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3499bc401779 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:76:98:a8:d7:e7} reservation:<nil>}
	I1227 09:10:43.105737  204666 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c1260ea8a496 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:1e:3f:a3:f0:1f} reservation:<nil>}
	I1227 09:10:43.106114  204666 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e5173b3fb685 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:c2:6a:35:6e:4e:02} reservation:<nil>}
	I1227 09:10:43.106601  204666 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a15060}
	I1227 09:10:43.106630  204666 network_create.go:124] attempt to create docker network force-systemd-flag-310604 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:10:43.106687  204666 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-310604 force-systemd-flag-310604
	I1227 09:10:43.181323  204666 network_create.go:108] docker network force-systemd-flag-310604 192.168.76.0/24 created
	I1227 09:10:43.181368  204666 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-310604" container
	I1227 09:10:43.181450  204666 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:10:43.199791  204666 cli_runner.go:164] Run: docker volume create force-systemd-flag-310604 --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:10:43.217217  204666 oci.go:103] Successfully created a docker volume force-systemd-flag-310604
	I1227 09:10:43.217303  204666 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-310604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --entrypoint /usr/bin/test -v force-systemd-flag-310604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:10:43.768592  204666 oci.go:107] Successfully prepared a docker volume force-systemd-flag-310604
	I1227 09:10:43.768647  204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:10:43.768656  204666 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:10:43.768730  204666 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-310604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:10:47.941425  204666 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-310604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.172659446s)
	I1227 09:10:47.941459  204666 kic.go:203] duration metric: took 4.172798697s to extract preloaded images to volume ...
	W1227 09:10:47.941608  204666 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:10:47.941723  204666 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:10:48.016863  204666 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-310604 --name force-systemd-flag-310604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-310604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-310604 --network force-systemd-flag-310604 --ip 192.168.76.2 --volume force-systemd-flag-310604:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:10:48.339827  204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Running}}
	I1227 09:10:48.361703  204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
	I1227 09:10:48.386273  204666 cli_runner.go:164] Run: docker exec force-systemd-flag-310604 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:10:48.435149  204666 oci.go:144] the created container "force-systemd-flag-310604" has a running status.
	I1227 09:10:48.435183  204666 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa...
	I1227 09:10:48.595417  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:10:48.595508  204666 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:10:48.621694  204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
	I1227 09:10:48.646093  204666 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:10:48.646113  204666 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-310604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:10:48.702415  204666 cli_runner.go:164] Run: docker container inspect force-systemd-flag-310604 --format={{.State.Status}}
	I1227 09:10:48.724275  204666 machine.go:94] provisionDockerMachine start ...
	I1227 09:10:48.724381  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:48.753127  204666 main.go:144] libmachine: Using SSH client type: native
	I1227 09:10:48.753463  204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1227 09:10:48.753473  204666 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:10:48.754067  204666 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48476->127.0.0.1:33043: read: connection reset by peer
	I1227 09:10:51.891685  204666 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-310604
	
	I1227 09:10:51.891708  204666 ubuntu.go:182] provisioning hostname "force-systemd-flag-310604"
	I1227 09:10:51.891772  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:51.909491  204666 main.go:144] libmachine: Using SSH client type: native
	I1227 09:10:51.909807  204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1227 09:10:51.909825  204666 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-310604 && echo "force-systemd-flag-310604" | sudo tee /etc/hostname
	I1227 09:10:52.057961  204666 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-310604
	
	I1227 09:10:52.058064  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:52.075700  204666 main.go:144] libmachine: Using SSH client type: native
	I1227 09:10:52.076053  204666 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1227 09:10:52.076078  204666 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-310604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-310604/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-310604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:10:52.217368  204666 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:10:52.217456  204666 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22344-2451/.minikube CaCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22344-2451/.minikube}
	I1227 09:10:52.217491  204666 ubuntu.go:190] setting up certificates
	I1227 09:10:52.217534  204666 provision.go:84] configureAuth start
	I1227 09:10:52.217619  204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
	I1227 09:10:52.237744  204666 provision.go:143] copyHostCerts
	I1227 09:10:52.237795  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
	I1227 09:10:52.237833  204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem, removing ...
	I1227 09:10:52.237841  204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem
	I1227 09:10:52.238083  204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/ca.pem (1078 bytes)
	I1227 09:10:52.238190  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
	I1227 09:10:52.238504  204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem, removing ...
	I1227 09:10:52.238511  204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem
	I1227 09:10:52.238894  204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/cert.pem (1123 bytes)
	I1227 09:10:52.239000  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
	I1227 09:10:52.239017  204666 exec_runner.go:144] found /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem, removing ...
	I1227 09:10:52.239022  204666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem
	I1227 09:10:52.239052  204666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22344-2451/.minikube/key.pem (1679 bytes)
	I1227 09:10:52.239110  204666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-310604 san=[127.0.0.1 192.168.76.2 force-systemd-flag-310604 localhost minikube]
	I1227 09:10:52.569945  204666 provision.go:177] copyRemoteCerts
	I1227 09:10:52.570044  204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:10:52.570093  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:52.587912  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:52.687698  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:10:52.687844  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:10:52.705320  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:10:52.705381  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:10:52.723327  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:10:52.723385  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 09:10:52.740566  204666 provision.go:87] duration metric: took 522.993586ms to configureAuth
	I1227 09:10:52.740592  204666 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:10:52.740766  204666 config.go:182] Loaded profile config "force-systemd-flag-310604": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:10:52.740780  204666 machine.go:97] duration metric: took 4.016481436s to provisionDockerMachine
	I1227 09:10:52.740787  204666 client.go:176] duration metric: took 9.685467552s to LocalClient.Create
	I1227 09:10:52.740816  204666 start.go:167] duration metric: took 9.685545363s to libmachine.API.Create "force-systemd-flag-310604"
	I1227 09:10:52.740827  204666 start.go:293] postStartSetup for "force-systemd-flag-310604" (driver="docker")
	I1227 09:10:52.740837  204666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:10:52.740910  204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:10:52.740954  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:52.757935  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:52.856170  204666 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:10:52.859510  204666 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:10:52.859542  204666 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:10:52.859553  204666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/addons for local assets ...
	I1227 09:10:52.859606  204666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22344-2451/.minikube/files for local assets ...
	I1227 09:10:52.859688  204666 filesync.go:149] local asset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> 42882.pem in /etc/ssl/certs
	I1227 09:10:52.859699  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> /etc/ssl/certs/42882.pem
	I1227 09:10:52.859802  204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:10:52.867151  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /etc/ssl/certs/42882.pem (1708 bytes)
	I1227 09:10:52.884851  204666 start.go:296] duration metric: took 144.00855ms for postStartSetup
	I1227 09:10:52.885206  204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
	I1227 09:10:52.901828  204666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/config.json ...
	I1227 09:10:52.902117  204666 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:10:52.902171  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:52.918960  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:53.021390  204666 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:10:53.026246  204666 start.go:128] duration metric: took 9.974681148s to createHost
	I1227 09:10:53.026316  204666 start.go:83] releasing machines lock for "force-systemd-flag-310604", held for 9.974853178s
	I1227 09:10:53.026407  204666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-310604
	I1227 09:10:53.043542  204666 ssh_runner.go:195] Run: cat /version.json
	I1227 09:10:53.043598  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:53.043860  204666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:10:53.043921  204666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-310604
	I1227 09:10:53.061875  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:53.068175  204666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/force-systemd-flag-310604/id_rsa Username:docker}
	I1227 09:10:53.255401  204666 ssh_runner.go:195] Run: systemctl --version
	I1227 09:10:53.262139  204666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:10:53.266534  204666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:10:53.266627  204666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:10:53.295238  204666 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:10:53.295259  204666 start.go:496] detecting cgroup driver to use...
	I1227 09:10:53.295273  204666 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:10:53.295340  204666 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 09:10:53.310658  204666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:10:53.324980  204666 docker.go:218] disabling cri-docker service (if available) ...
	I1227 09:10:53.325045  204666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 09:10:53.342693  204666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 09:10:53.361786  204666 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 09:10:53.481591  204666 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 09:10:53.609612  204666 docker.go:234] disabling docker service ...
	I1227 09:10:53.609677  204666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 09:10:53.632809  204666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 09:10:53.646556  204666 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 09:10:53.776893  204666 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 09:10:53.893803  204666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:10:53.906923  204666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:10:53.921921  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:10:53.930787  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:10:53.940192  204666 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:10:53.940311  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:10:53.949596  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:10:53.959130  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:10:53.967866  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:10:53.977401  204666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:10:53.985565  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:10:53.994878  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:10:54.004397  204666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:10:54.016162  204666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:10:54.025513  204666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:10:54.034319  204666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:10:54.150756  204666 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 09:10:54.285989  204666 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 09:10:54.286115  204666 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 09:10:54.290075  204666 start.go:574] Will wait 60s for crictl version
	I1227 09:10:54.290185  204666 ssh_runner.go:195] Run: which crictl
	I1227 09:10:54.293949  204666 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:10:54.321666  204666 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 09:10:54.321783  204666 ssh_runner.go:195] Run: containerd --version
	I1227 09:10:54.345867  204666 ssh_runner.go:195] Run: containerd --version
	I1227 09:10:54.376785  204666 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 09:10:54.379751  204666 cli_runner.go:164] Run: docker network inspect force-systemd-flag-310604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:10:54.401792  204666 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:10:54.406481  204666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:10:54.416271  204666 kubeadm.go:884] updating cluster {Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:10:54.416393  204666 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 09:10:54.416457  204666 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:10:54.444036  204666 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 09:10:54.444061  204666 containerd.go:542] Images already preloaded, skipping extraction
	I1227 09:10:54.444118  204666 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 09:10:54.485541  204666 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 09:10:54.485561  204666 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:10:54.485569  204666 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1227 09:10:54.485974  204666 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-310604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:10:54.486092  204666 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 09:10:54.526429  204666 cni.go:84] Creating CNI manager for ""
	I1227 09:10:54.526503  204666 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:10:54.526540  204666 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:10:54.526596  204666 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-310604 NodeName:force-systemd-flag-310604 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:10:54.526756  204666 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-310604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
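
The kubeadm config above pins cgroupDriver: systemd (this is the --force-systemd run), so the kubelet can only become healthy if containerd inside the node is also driving cgroups through systemd. A minimal cross-check, assuming the docker driver's node container is named after the profile and that containerd keeps its config at the usual /etc/containerd/config.toml (both are assumptions, not something this log confirms):

	# assumes the kic node container carries the profile name (docker driver)
	# does containerd's runc runtime use the systemd cgroup driver?
	docker exec force-systemd-flag-310604 grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# does the kubelet config that kubeadm writes later agree?
	docker exec force-systemd-flag-310604 grep -n 'cgroupDriver' /var/lib/kubelet/config.yaml

A mismatch between the two is one common reason the kubelet health check that kubeadm polls later never turns green.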
	
	I1227 09:10:54.526867  204666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:10:54.534776  204666 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:10:54.534862  204666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:10:54.542666  204666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1227 09:10:54.555276  204666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:10:54.568252  204666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1227 09:10:54.581175  204666 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:10:54.584678  204666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:10:54.594342  204666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:10:54.722742  204666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:10:54.739944  204666 certs.go:69] Setting up /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604 for IP: 192.168.76.2
	I1227 09:10:54.739989  204666 certs.go:195] generating shared ca certs ...
	I1227 09:10:54.740005  204666 certs.go:227] acquiring lock for ca certs: {Name:mk774ac921aa16ecd5f2d791fd87948cd01f1dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:54.740163  204666 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key
	I1227 09:10:54.740222  204666 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key
	I1227 09:10:54.740235  204666 certs.go:257] generating profile certs ...
	I1227 09:10:54.740300  204666 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key
	I1227 09:10:54.740327  204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt with IP's: []
	I1227 09:10:54.883927  204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt ...
	I1227 09:10:54.883962  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.crt: {Name:mkaf7a59941c35faf8629e9c6734e607330f0676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:54.884180  204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key ...
	I1227 09:10:54.884200  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/client.key: {Name:mk15fe73d8be76bfb61d2cf22a9a54c4980a1213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:54.884320  204666 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c
	I1227 09:10:54.884341  204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:10:55.261500  204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c ...
	I1227 09:10:55.261538  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c: {Name:mkd8a84348a7ab947593ad31a2bf6eac08baadd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:55.261722  204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c ...
	I1227 09:10:55.261739  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c: {Name:mk0b1844eb49c1d885fbeaa194740cfbf0f66c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:55.261815  204666 certs.go:382] copying /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt.c8f9b72c -> /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt
	I1227 09:10:55.261907  204666 certs.go:386] copying /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key.c8f9b72c -> /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key
	I1227 09:10:55.261975  204666 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key
	I1227 09:10:55.261997  204666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt with IP's: []
	I1227 09:10:55.489265  204666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt ...
	I1227 09:10:55.489301  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt: {Name:mkd9f18caf462c3a8d2a28c4ddec386f0dbd816a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:55.489549  204666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key ...
	I1227 09:10:55.489567  204666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key: {Name:mk9ff37441688b65bb6af030e9075e756fa5b4e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:10:55.489687  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:10:55.489718  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:10:55.489742  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:10:55.489765  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:10:55.489782  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:10:55.489806  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:10:55.489826  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:10:55.489837  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:10:55.489910  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem (1338 bytes)
	W1227 09:10:55.489959  204666 certs.go:480] ignoring /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288_empty.pem, impossibly tiny 0 bytes
	I1227 09:10:55.489975  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 09:10:55.490010  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:10:55.490045  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:10:55.490073  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/key.pem (1679 bytes)
	I1227 09:10:55.490121  204666 certs.go:484] found cert: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem (1708 bytes)
	I1227 09:10:55.490158  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.490176  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem -> /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.490197  204666 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem -> /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.490797  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:10:55.520180  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1227 09:10:55.539728  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:10:55.558726  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1227 09:10:55.577125  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:10:55.595030  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:10:55.612583  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:10:55.629890  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/force-systemd-flag-310604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:10:55.647395  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:10:55.664281  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/certs/4288.pem --> /usr/share/ca-certificates/4288.pem (1338 bytes)
	I1227 09:10:55.682209  204666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/ssl/certs/42882.pem --> /usr/share/ca-certificates/42882.pem (1708 bytes)
	I1227 09:10:55.699375  204666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:10:55.713225  204666 ssh_runner.go:195] Run: openssl version
	I1227 09:10:55.719549  204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.726782  204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42882.pem /etc/ssl/certs/42882.pem
	I1227 09:10:55.734088  204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.737803  204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 08:34 /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.737867  204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42882.pem
	I1227 09:10:55.779013  204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:10:55.786846  204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42882.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:10:55.794882  204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.802676  204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:10:55.810367  204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.814525  204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.814592  204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:10:55.856125  204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:10:55.863440  204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:10:55.870807  204666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.877797  204666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4288.pem /etc/ssl/certs/4288.pem
	I1227 09:10:55.885325  204666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.889003  204666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 08:34 /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.889078  204666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4288.pem
	I1227 09:10:55.930128  204666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:10:55.937477  204666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4288.pem /etc/ssl/certs/51391683.0
	I1227 09:10:55.944699  204666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:10:55.948214  204666 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:10:55.948267  204666 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-310604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-310604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:10:55.948345  204666 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 09:10:55.948412  204666 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 09:10:55.985105  204666 cri.go:96] found id: ""
	I1227 09:10:55.985202  204666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:10:55.994392  204666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:10:56.002476  204666 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:10:56.002588  204666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:10:56.013561  204666 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:10:56.013641  204666 kubeadm.go:158] found existing configuration files:
	
	I1227 09:10:56.013734  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:10:56.026163  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:10:56.026252  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:10:56.034452  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:10:56.042951  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:10:56.043043  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:10:56.051250  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:10:56.059162  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:10:56.059229  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:10:56.066603  204666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:10:56.074518  204666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:10:56.074592  204666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:10:56.081945  204666 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:10:56.121942  204666 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:10:56.122047  204666 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:10:56.212923  204666 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:10:56.213040  204666 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:10:56.213099  204666 kubeadm.go:319] OS: Linux
	I1227 09:10:56.213162  204666 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:10:56.213227  204666 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:10:56.213298  204666 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:10:56.213364  204666 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:10:56.213434  204666 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:10:56.213512  204666 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:10:56.213583  204666 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:10:56.213655  204666 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:10:56.213718  204666 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:10:56.276575  204666 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:10:56.276758  204666 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:10:56.276888  204666 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:10:56.284403  204666 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:10:56.290757  204666 out.go:252]   - Generating certificates and keys ...
	I1227 09:10:56.290854  204666 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:10:56.290926  204666 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:10:56.622516  204666 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:10:57.129861  204666 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:10:57.426106  204666 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:10:57.593509  204666 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:10:57.874524  204666 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:10:57.874936  204666 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:10:58.122828  204666 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:10:58.123152  204666 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-310604 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:10:58.265970  204666 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:10:58.561360  204666 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:10:58.701478  204666 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:10:58.701573  204666 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:10:58.886739  204666 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:10:59.201465  204666 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:11:00.021317  204666 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:11:00.354783  204666 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:11:00.706525  204666 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:11:00.707614  204666 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:11:00.710676  204666 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:11:00.714234  204666 out.go:252]   - Booting up control plane ...
	I1227 09:11:00.714348  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:11:00.714433  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:11:00.720333  204666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:11:00.746371  204666 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:11:00.746513  204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:11:00.754160  204666 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:11:00.754510  204666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:11:00.754557  204666 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:11:00.882317  204666 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:11:00.882439  204666 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:13:38.676352  186168 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000277633s
	I1227 09:13:38.676379  186168 kubeadm.go:319] 
	I1227 09:13:38.676436  186168 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 09:13:38.676470  186168 kubeadm.go:319] 	- The kubelet is not running
	I1227 09:13:38.676575  186168 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 09:13:38.676580  186168 kubeadm.go:319] 
	I1227 09:13:38.676684  186168 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 09:13:38.676716  186168 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 09:13:38.676747  186168 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 09:13:38.676751  186168 kubeadm.go:319] 
	I1227 09:13:38.681021  186168 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 09:13:38.681446  186168 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 09:13:38.681559  186168 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 09:13:38.681797  186168 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 09:13:38.681807  186168 kubeadm.go:319] 
	I1227 09:13:38.681877  186168 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
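
kubeadm gives up at this point because the kubelet never answers its local health endpoint. The checks it names are plain commands and can be re-run by hand against the node; a rough sketch, assuming the docker driver's node container carries the profile name and has curl available (neither is confirmed by this log):

	docker exec force-systemd-flag-310604 systemctl status kubelet
	docker exec force-systemd-flag-310604 journalctl -xeu kubelet --no-pager | tail -n 100
	# the same endpoint kubeadm polled for up to 4 minutes
	docker exec force-systemd-flag-310604 curl -sS http://127.0.0.1:10248/healthz

The kubelet journal is usually where the misconfiguration that keeps it from starting (for example a cgroup-driver problem) actually shows up.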
	I1227 09:13:38.681942  186168 kubeadm.go:403] duration metric: took 8m7.02742731s to StartCluster
	I1227 09:13:38.681979  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 09:13:38.682043  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 09:13:38.714462  186168 cri.go:96] found id: ""
	I1227 09:13:38.714496  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.714504  186168 logs.go:284] No container was found matching "kube-apiserver"
	I1227 09:13:38.714511  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 09:13:38.714572  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 09:13:38.753180  186168 cri.go:96] found id: ""
	I1227 09:13:38.753202  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.753211  186168 logs.go:284] No container was found matching "etcd"
	I1227 09:13:38.753217  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 09:13:38.753277  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 09:13:38.812117  186168 cri.go:96] found id: ""
	I1227 09:13:38.812139  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.812148  186168 logs.go:284] No container was found matching "coredns"
	I1227 09:13:38.812154  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 09:13:38.812209  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 09:13:38.837080  186168 cri.go:96] found id: ""
	I1227 09:13:38.837106  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.837115  186168 logs.go:284] No container was found matching "kube-scheduler"
	I1227 09:13:38.837121  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 09:13:38.837179  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 09:13:38.860892  186168 cri.go:96] found id: ""
	I1227 09:13:38.860914  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.860923  186168 logs.go:284] No container was found matching "kube-proxy"
	I1227 09:13:38.860929  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 09:13:38.860989  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 09:13:38.886654  186168 cri.go:96] found id: ""
	I1227 09:13:38.886678  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.886689  186168 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 09:13:38.886696  186168 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 09:13:38.886756  186168 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 09:13:38.911554  186168 cri.go:96] found id: ""
	I1227 09:13:38.911576  186168 logs.go:282] 0 containers: []
	W1227 09:13:38.911584  186168 logs.go:284] No container was found matching "kindnet"
	I1227 09:13:38.911594  186168 logs.go:123] Gathering logs for describe nodes ...
	I1227 09:13:38.911605  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 09:13:39.229045  186168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:13:39.220260    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.220979    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.222634    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.223338    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.225058    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 09:13:39.220260    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.220979    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.222634    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.223338    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:39.225058    4863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 09:13:39.229067  186168 logs.go:123] Gathering logs for containerd ...
	I1227 09:13:39.229081  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 09:13:39.268424  186168 logs.go:123] Gathering logs for container status ...
	I1227 09:13:39.268460  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 09:13:39.295793  186168 logs.go:123] Gathering logs for kubelet ...
	I1227 09:13:39.295819  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 09:13:39.354186  186168 logs.go:123] Gathering logs for dmesg ...
	I1227 09:13:39.354220  186168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1227 09:13:39.369374  186168 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 09:13:39.369422  186168 out.go:285] * 
	W1227 09:13:39.369472  186168 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:13:39.369488  186168 out.go:285] * 
	W1227 09:13:39.369742  186168 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 09:13:39.374752  186168 out.go:203] 
	W1227 09:13:39.378467  186168 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000277633s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 09:13:39.378506  186168 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 09:13:39.378531  186168 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 09:13:39.381602  186168 out.go:203] 
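
The suggestion above points at a cgroup-driver mismatch between the kubelet and the container runtime. Applying it to this profile would look roughly like the command below; only the --extra-config flag comes from the suggestion, the other flags simply mirror the profile's driver and runtime, so treat this as a sketch rather than the test's actual invocation:

	minikube start -p force-systemd-flag-310604 --force-systemd --driver=docker --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd

If the retried start comes up healthy, the cgroup driver was the culprit; if it fails the same way, the linked issue above is the place to compare notes.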
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359402009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359413751Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359448623Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359464623Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359478876Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359489887Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359501235Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359511959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359524242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359561166Z" level=info msg="Connect containerd service"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.359883746Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.360503191Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.392785429Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.392892663Z" level=info msg="Start subscribing containerd event"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.393817433Z" level=info msg="Start recovering state"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.398162854Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.499565115Z" level=info msg="Start event monitor"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.499629149Z" level=info msg="Start cni network conf syncer for default"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.499639069Z" level=info msg="Start streaming server"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.499649317Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.499657194Z" level=info msg="runtime interface starting up..."
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.499663947Z" level=info msg="starting plugins..."
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.499676665Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 09:05:29 force-systemd-env-145961 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 27 09:05:29 force-systemd-env-145961 containerd[759]: time="2025-12-27T09:05:29.510399949Z" level=info msg="containerd successfully booted in 0.208123s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 09:13:40.845593    4998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:40.846039    4998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:40.847571    4998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:40.848033    4998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 09:13:40.849475    4998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015479] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.516409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034238] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.771451] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.481009] kauditd_printk_skb: 39 callbacks suppressed
	[Dec27 08:29] hrtimer: interrupt took 43410871 ns
	
	
	==> kernel <==
	 09:13:40 up 56 min,  0 user,  load average: 0.44, 1.20, 1.91
	Linux force-systemd-env-145961 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 09:13:37 force-systemd-env-145961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:13:37 force-systemd-env-145961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 27 09:13:37 force-systemd-env-145961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:13:37 force-systemd-env-145961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:13:38 force-systemd-env-145961 kubelet[4793]: E1227 09:13:38.012965    4793 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:13:38 force-systemd-env-145961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:13:38 force-systemd-env-145961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:13:38 force-systemd-env-145961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 27 09:13:38 force-systemd-env-145961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:13:38 force-systemd-env-145961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:13:38 force-systemd-env-145961 kubelet[4807]: E1227 09:13:38.786112    4807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:13:38 force-systemd-env-145961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:13:38 force-systemd-env-145961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:13:39 force-systemd-env-145961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 09:13:39 force-systemd-env-145961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:13:39 force-systemd-env-145961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:13:39 force-systemd-env-145961 kubelet[4887]: E1227 09:13:39.547920    4887 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:13:39 force-systemd-env-145961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:13:39 force-systemd-env-145961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 09:13:40 force-systemd-env-145961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 09:13:40 force-systemd-env-145961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:13:40 force-systemd-env-145961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 09:13:40 force-systemd-env-145961 kubelet[4915]: E1227 09:13:40.302081    4915 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 09:13:40 force-systemd-env-145961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 09:13:40 force-systemd-env-145961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-145961 -n force-systemd-env-145961
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-145961 -n force-systemd-env-145961: exit status 6 (506.401453ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:13:41.478217  209382 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-145961" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-145961" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-145961" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-145961
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-145961: (1.972402385s)
--- FAIL: TestForceSystemdEnv (507.52s)
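For context on this failure mode: the kubelet journal above shows kubelet v1.35 exiting on every restart because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), and the SystemVerification warning notes that keeping cgroup v1 support on kubelet v1.35 or newer requires explicitly setting the kubelet configuration option FailCgroupV1 to false. The minikube output itself suggests retrying with an explicit kubelet cgroup driver. A rough, unverified sketch of that retry, reusing the profile name from this run (the driver and runtime flags are assumptions matching this job's configuration, not copied from the failing invocation):

    # troubleshooting commands suggested by the kubeadm output above
    systemctl status kubelet
    journalctl -xeu kubelet

    # retry with the cgroup driver suggested by minikube
    out/minikube-linux-arm64 start -p force-systemd-env-145961 --driver=docker --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd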

                                                
                                    

Test pass (305/337)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.35.0/json-events 3.35
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.23
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.18
27 TestAddons/Setup 137.66
29 TestAddons/serial/Volcano 40.72
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 8.94
35 TestAddons/parallel/Registry 17.36
36 TestAddons/parallel/RegistryCreds 0.88
37 TestAddons/parallel/Ingress 18.27
38 TestAddons/parallel/InspektorGadget 11.72
39 TestAddons/parallel/MetricsServer 6.82
41 TestAddons/parallel/CSI 40.06
42 TestAddons/parallel/Headlamp 15.8
43 TestAddons/parallel/CloudSpanner 5.63
44 TestAddons/parallel/LocalPath 51.43
45 TestAddons/parallel/NvidiaDevicePlugin 6.56
46 TestAddons/parallel/Yakd 11.8
48 TestAddons/StoppedEnableDisable 12.34
49 TestCertOptions 29.45
50 TestCertExpiration 213.86
54 TestDockerEnvContainerd 41.98
58 TestErrorSpam/setup 27.74
59 TestErrorSpam/start 0.85
60 TestErrorSpam/status 1.15
61 TestErrorSpam/pause 1.75
62 TestErrorSpam/unpause 1.81
63 TestErrorSpam/stop 1.58
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 45.62
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.93
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
75 TestFunctional/serial/CacheCmd/cache/add_local 1.32
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 43.26
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.18
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 8.81
91 TestFunctional/parallel/DryRun 0.48
92 TestFunctional/parallel/InternationalLanguage 0.25
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 7.62
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 20.98
101 TestFunctional/parallel/SSHCmd 0.76
102 TestFunctional/parallel/CpCmd 2.11
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.16
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
113 TestFunctional/parallel/License 0.26
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 1.44
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.92
121 TestFunctional/parallel/ImageCommands/Setup 0.59
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.67
129 TestFunctional/parallel/ProfileCmd/profile_list 0.57
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.46
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.72
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ServiceCmd/DeployApp 8.24
147 TestFunctional/parallel/MountCmd/any-port 8.04
148 TestFunctional/parallel/ServiceCmd/List 0.52
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
151 TestFunctional/parallel/ServiceCmd/Format 0.37
152 TestFunctional/parallel/ServiceCmd/URL 0.42
153 TestFunctional/parallel/MountCmd/specific-port 1.8
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.52
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 150.15
163 TestMultiControlPlane/serial/DeployApp 7.14
164 TestMultiControlPlane/serial/PingHostFromPods 1.7
165 TestMultiControlPlane/serial/AddWorkerNode 30.83
166 TestMultiControlPlane/serial/NodeLabels 0.14
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 20.47
169 TestMultiControlPlane/serial/StopSecondaryNode 12.94
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.25
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.16
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 101.31
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.63
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.73
177 TestMultiControlPlane/serial/RestartCluster 58.89
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.33
179 TestMultiControlPlane/serial/AddSecondaryNode 76.66
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.12
185 TestJSONOutput/start/Command 45.21
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.74
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.66
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.01
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 30.75
211 TestKicCustomNetwork/use_default_bridge_network 29.73
212 TestKicExistingNetwork 32.34
213 TestKicCustomSubnet 31.38
214 TestKicStaticIP 31.16
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 62.21
219 TestMountStart/serial/StartWithMountFirst 8.2
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 8.85
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 7.58
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 74.17
231 TestMultiNode/serial/DeployApp2Nodes 4.76
232 TestMultiNode/serial/PingHostFrom2Pods 0.98
233 TestMultiNode/serial/AddNode 25.81
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.28
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 7.77
239 TestMultiNode/serial/RestartKeepsNodes 79.35
240 TestMultiNode/serial/DeleteNode 5.74
241 TestMultiNode/serial/StopMultiNode 24.22
242 TestMultiNode/serial/RestartMultiNode 48.56
243 TestMultiNode/serial/ValidateNameConflict 30.2
250 TestScheduledStopUnix 100.41
253 TestInsufficientStorage 12.38
254 TestRunningBinaryUpgrade 63.68
256 TestKubernetesUpgrade 348.82
257 TestMissingContainerUpgrade 150.04
259 TestPause/serial/Start 58.05
260 TestPause/serial/SecondStartNoReconfiguration 8.08
261 TestPause/serial/Pause 0.69
262 TestPause/serial/VerifyStatus 0.34
263 TestPause/serial/Unpause 0.64
264 TestPause/serial/PauseAgain 0.84
265 TestPause/serial/DeletePaused 3.02
266 TestPause/serial/VerifyDeletedResources 0.14
267 TestStoppedBinaryUpgrade/Setup 0.84
268 TestStoppedBinaryUpgrade/Upgrade 53.82
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.95
277 TestPreload/Start-NoPreload-PullImage 62.62
278 TestPreload/Restart-With-Preload-Check-User-Image 51.52
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
282 TestNoKubernetes/serial/StartWithK8s 28.83
283 TestNoKubernetes/serial/StartWithStopK8s 22.76
284 TestNoKubernetes/serial/Start 7.72
285 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
287 TestNoKubernetes/serial/ProfileList 1.01
288 TestNoKubernetes/serial/Stop 1.32
289 TestNoKubernetes/serial/StartNoArgs 6.63
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
298 TestNetworkPlugins/group/false 3.56
303 TestStartStop/group/old-k8s-version/serial/FirstStart 58.51
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.5
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.23
306 TestStartStop/group/old-k8s-version/serial/Stop 12.12
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/old-k8s-version/serial/SecondStart 51.93
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
312 TestStartStop/group/old-k8s-version/serial/Pause 3.05
314 TestStartStop/group/no-preload/serial/FirstStart 51.62
315 TestStartStop/group/no-preload/serial/DeployApp 9.33
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.32
317 TestStartStop/group/no-preload/serial/Stop 12.15
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
319 TestStartStop/group/no-preload/serial/SecondStart 48.97
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
323 TestStartStop/group/no-preload/serial/Pause 3.22
325 TestStartStop/group/embed-certs/serial/FirstStart 51.17
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.92
328 TestStartStop/group/embed-certs/serial/DeployApp 9.38
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.42
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
331 TestStartStop/group/embed-certs/serial/Stop 12.25
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
334 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
335 TestStartStop/group/embed-certs/serial/SecondStart 53.62
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.48
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.22
342 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
343 TestStartStop/group/embed-certs/serial/Pause 3.42
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.57
347 TestStartStop/group/newest-cni/serial/FirstStart 36.89
348 TestPreload/PreloadSrc/gcs 4.84
349 TestPreload/PreloadSrc/github 4.36
350 TestPreload/PreloadSrc/gcs-cached 0.71
351 TestNetworkPlugins/group/auto/Start 53.76
352 TestStartStop/group/newest-cni/serial/DeployApp 0
353 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.97
354 TestStartStop/group/newest-cni/serial/Stop 1.63
355 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
356 TestStartStop/group/newest-cni/serial/SecondStart 18.15
357 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
360 TestStartStop/group/newest-cni/serial/Pause 3.11
361 TestNetworkPlugins/group/kindnet/Start 50.88
362 TestNetworkPlugins/group/auto/KubeletFlags 0.42
363 TestNetworkPlugins/group/auto/NetCatPod 11.39
364 TestNetworkPlugins/group/auto/DNS 0.24
365 TestNetworkPlugins/group/auto/Localhost 0.24
366 TestNetworkPlugins/group/auto/HairPin 0.2
367 TestNetworkPlugins/group/calico/Start 71.82
368 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
369 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
370 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
371 TestNetworkPlugins/group/kindnet/DNS 0.3
372 TestNetworkPlugins/group/kindnet/Localhost 0.17
373 TestNetworkPlugins/group/kindnet/HairPin 0.18
374 TestNetworkPlugins/group/custom-flannel/Start 51.05
375 TestNetworkPlugins/group/calico/ControllerPod 6.01
376 TestNetworkPlugins/group/calico/KubeletFlags 0.39
377 TestNetworkPlugins/group/calico/NetCatPod 9.46
378 TestNetworkPlugins/group/calico/DNS 0.35
379 TestNetworkPlugins/group/calico/Localhost 0.17
380 TestNetworkPlugins/group/calico/HairPin 0.23
381 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
382 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.4
383 TestNetworkPlugins/group/enable-default-cni/Start 76.31
384 TestNetworkPlugins/group/custom-flannel/DNS 0.23
385 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
386 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
387 TestNetworkPlugins/group/flannel/Start 52.33
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.28
390 TestNetworkPlugins/group/flannel/ControllerPod 6.01
391 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
392 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
393 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
394 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
395 TestNetworkPlugins/group/flannel/NetCatPod 10.28
396 TestNetworkPlugins/group/flannel/DNS 0.26
397 TestNetworkPlugins/group/flannel/Localhost 0.22
398 TestNetworkPlugins/group/flannel/HairPin 0.33
399 TestNetworkPlugins/group/bridge/Start 47.33
400 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
401 TestNetworkPlugins/group/bridge/NetCatPod 9.39
402 TestNetworkPlugins/group/bridge/DNS 0.17
403 TestNetworkPlugins/group/bridge/Localhost 0.15
404 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (5.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-739102 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-739102 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.255945129s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.26s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 08:28:07.018304    4288 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1227 08:28:07.018382    4288 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-739102
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-739102: exit status 85 (78.640772ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-739102 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-739102 │ jenkins │ v1.37.0 │ 27 Dec 25 08:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 08:28:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 08:28:01.805139    4294 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:28:01.805271    4294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:28:01.805283    4294 out.go:374] Setting ErrFile to fd 2...
	I1227 08:28:01.805288    4294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:28:01.805528    4294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	W1227 08:28:01.805657    4294 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22344-2451/.minikube/config/config.json: open /home/jenkins/minikube-integration/22344-2451/.minikube/config/config.json: no such file or directory
	I1227 08:28:01.806055    4294 out.go:368] Setting JSON to true
	I1227 08:28:01.806787    4294 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":635,"bootTime":1766823447,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 08:28:01.806851    4294 start.go:143] virtualization:  
	I1227 08:28:01.813061    4294 out.go:99] [download-only-739102] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1227 08:28:01.813248    4294 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 08:28:01.813326    4294 notify.go:221] Checking for updates...
	I1227 08:28:01.816462    4294 out.go:171] MINIKUBE_LOCATION=22344
	I1227 08:28:01.819708    4294 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:28:01.822894    4294 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 08:28:01.826129    4294 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 08:28:01.829283    4294 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 08:28:01.835558    4294 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 08:28:01.835829    4294 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:28:01.865670    4294 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 08:28:01.865772    4294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 08:28:02.279960    4294 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 08:28:02.270586794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 08:28:02.280084    4294 docker.go:319] overlay module found
	I1227 08:28:02.283271    4294 out.go:99] Using the docker driver based on user configuration
	I1227 08:28:02.283304    4294 start.go:309] selected driver: docker
	I1227 08:28:02.283310    4294 start.go:928] validating driver "docker" against <nil>
	I1227 08:28:02.283417    4294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 08:28:02.340534    4294 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 08:28:02.331386883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 08:28:02.340694    4294 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 08:28:02.340971    4294 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 08:28:02.341138    4294 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 08:28:02.344304    4294 out.go:171] Using Docker driver with root privileges
	I1227 08:28:02.347196    4294 cni.go:84] Creating CNI manager for ""
	I1227 08:28:02.347262    4294 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 08:28:02.347275    4294 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 08:28:02.347361    4294 start.go:353] cluster config:
	{Name:download-only-739102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-739102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:28:02.350515    4294 out.go:99] Starting "download-only-739102" primary control-plane node in "download-only-739102" cluster
	I1227 08:28:02.350538    4294 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 08:28:02.353434    4294 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 08:28:02.353471    4294 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 08:28:02.353617    4294 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 08:28:02.370761    4294 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 08:28:02.370934    4294 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 08:28:02.371032    4294 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 08:28:02.406285    4294 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 08:28:02.406311    4294 cache.go:65] Caching tarball of preloaded images
	I1227 08:28:02.406463    4294 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 08:28:02.409819    4294 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 08:28:02.409850    4294 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 08:28:02.409857    4294 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1227 08:28:02.499748    4294 preload.go:313] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1227 08:28:02.499918    4294 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 08:28:05.653099    4294 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1227 08:28:05.653587    4294 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/download-only-739102/config.json ...
	I1227 08:28:05.653628    4294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/download-only-739102/config.json: {Name:mkb99dd09a5ff41ec6cbce7f4cb305f8be480f2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 08:28:05.653803    4294 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 08:28:05.653994    4294 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22344-2451/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-739102 host does not exist
	  To start a cluster, run: "minikube start -p download-only-739102"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
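The Last Start log above also records how this preload was fetched: minikube asks the GCS API for the tarball checksum, then downloads the tarball with that checksum attached for verification. If the cached preload is ever suspect, the same check can be repeated by hand; a minimal sketch using the path and checksum reported in the log above (assumes md5sum is available on the host):

    md5sum /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
    # expected: 38d7f581f2fa4226c8af2c9106b982b7 (the checksum returned by the GCS API in the log above)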

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-739102
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (3.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-308237 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-308237 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.354674836s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.35s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 08:28:10.795342    4288 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 08:28:10.795388    4288 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22344-2451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-308237
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-308237: exit status 85 (89.016616ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-739102 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-739102 │ jenkins │ v1.37.0 │ 27 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 08:28 UTC │ 27 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-739102                                                                                                                                                               │ download-only-739102 │ jenkins │ v1.37.0 │ 27 Dec 25 08:28 UTC │ 27 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-308237 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-308237 │ jenkins │ v1.37.0 │ 27 Dec 25 08:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 08:28:07
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 08:28:07.478113    4489 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:28:07.478272    4489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:28:07.478310    4489 out.go:374] Setting ErrFile to fd 2...
	I1227 08:28:07.478330    4489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:28:07.478600    4489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:28:07.479033    4489 out.go:368] Setting JSON to true
	I1227 08:28:07.479788    4489 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":641,"bootTime":1766823447,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 08:28:07.479878    4489 start.go:143] virtualization:  
	I1227 08:28:07.483253    4489 out.go:99] [download-only-308237] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 08:28:07.483585    4489 notify.go:221] Checking for updates...
	I1227 08:28:07.487415    4489 out.go:171] MINIKUBE_LOCATION=22344
	I1227 08:28:07.490452    4489 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:28:07.493505    4489 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 08:28:07.496731    4489 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 08:28:07.499621    4489 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 08:28:07.505199    4489 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 08:28:07.505443    4489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:28:07.526486    4489 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 08:28:07.526580    4489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 08:28:07.590745    4489 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 08:28:07.58165827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 08:28:07.590855    4489 docker.go:319] overlay module found
	I1227 08:28:07.593931    4489 out.go:99] Using the docker driver based on user configuration
	I1227 08:28:07.593979    4489 start.go:309] selected driver: docker
	I1227 08:28:07.593986    4489 start.go:928] validating driver "docker" against <nil>
	I1227 08:28:07.594099    4489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 08:28:07.652473    4489 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 08:28:07.643075061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 08:28:07.652642    4489 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 08:28:07.652924    4489 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 08:28:07.653072    4489 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 08:28:07.656182    4489 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-308237 host does not exist
	  To start a cluster, run: "minikube start -p download-only-308237"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-308237
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1227 08:28:11.948412    4288 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-975267 --alsologtostderr --binary-mirror http://127.0.0.1:39599 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-975267" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-975267
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-130695
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-130695: exit status 85 (197.054059ms)

                                                
                                                
-- stdout --
	* Profile "addons-130695" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-130695"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-130695
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-130695: exit status 85 (178.900631ms)

                                                
                                                
-- stdout --
	* Profile "addons-130695" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-130695"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.18s)

                                                
                                    
x
+
TestAddons/Setup (137.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-130695 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-130695 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m17.658420158s)
--- PASS: TestAddons/Setup (137.66s)

                                                
                                    
x
+
TestAddons/serial/Volcano (40.72s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 51.892135ms
addons_test.go:870: volcano-scheduler stabilized in 52.524165ms
addons_test.go:886: volcano-controller stabilized in 52.55994ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-ms4bq" [9fbc6e14-7da1-4af2-9960-c6d1520922cd] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003318016s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-x4wgk" [56eb325b-81f5-447f-aa8b-b8f45dabb864] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003420948s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-m48wk" [8f89eb84-8ed4-4e46-af13-729fff2b828c] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003888762s
addons_test.go:905: (dbg) Run:  kubectl --context addons-130695 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-130695 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-130695 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [7625d480-e30d-4fdd-a03e-84aa5ce90833] Pending
helpers_test.go:353: "test-job-nginx-0" [7625d480-e30d-4fdd-a03e-84aa5ce90833] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [7625d480-e30d-4fdd-a03e-84aa5ce90833] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004103476s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-130695 addons disable volcano --alsologtostderr -v=1: (11.90688294s)
--- PASS: TestAddons/serial/Volcano (40.72s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-130695 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-130695 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.94s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-130695 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-130695 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0231b08e-ca63-444d-be8a-de9b6f94dd71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0231b08e-ca63-444d-be8a-de9b6f94dd71] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.0032178s
addons_test.go:696: (dbg) Run:  kubectl --context addons-130695 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-130695 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-130695 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-130695 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.94s)
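
Note: the gcp-auth checks above can be reproduced by hand against any profile with the gcp-auth addon enabled. A minimal sketch using the same commands the test runs; the profile name, pod name, and testdata manifest path are taken from this log (substitute your own):

	# deploy a pod and a service account in the gcp-auth-enabled profile
	kubectl --context addons-130695 create -f testdata/busybox.yaml
	kubectl --context addons-130695 create sa gcp-auth-test
	# the webhook should have injected fake credentials into the pod
	kubectl --context addons-130695 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-130695 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
	kubectl --context addons-130695 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"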

                                                
                                    
x
+
TestAddons/parallel/Registry (17.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 5.690221ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-sdt7l" [5e385fe0-add9-4035-8480-4701dc41a992] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006298962s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-vpmt4" [d277dca0-9bce-4c9d-8a13-3fbc09fd69ac] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.007067703s
addons_test.go:394: (dbg) Run:  kubectl --context addons-130695 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-130695 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-130695 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.222837756s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 ip
2025/12/27 08:31:46 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.36s)
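
Note: a hedged sketch of the same registry check, runnable against any profile with the registry addon enabled. The busybox image, the service DNS name, and the node port 5000 are the ones this test itself exercises:

	# confirm the in-cluster registry answers over its ClusterIP service
	kubectl --context addons-130695 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# with the Linux docker driver the registry proxy is also reachable from the host on the node IP
	curl -s "http://$(out/minikube-linux-arm64 -p addons-130695 ip):5000"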

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.88s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 29.300471ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-130695
addons_test.go:334: (dbg) Run:  kubectl --context addons-130695 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.88s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (18.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-130695 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-130695 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-130695 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [9b7932a8-3280-43e0-b76a-e30c4392d5a1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [9b7932a8-3280-43e0-b76a-e30c4392d5a1] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.002720965s
I1227 08:32:59.248541    4288 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-130695 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-130695 addons disable ingress-dns --alsologtostderr -v=1: (1.581618574s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-130695 addons disable ingress --alsologtostderr -v=1: (7.795257332s)
--- PASS: TestAddons/parallel/Ingress (18.27s)
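
Note: the ingress verification reduces to two probes, both visible in the commands above; a minimal sketch (profile name and the nginx.example.com / hello-john.test host names are the test's own fixtures):

	# hit the nginx ingress from inside the node, routing by Host header
	out/minikube-linux-arm64 -p addons-130695 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# resolve a test record through the ingress-dns addon, pointed at the node IP
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-130695 ip)"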

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.72s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-xszf7" [a0079395-deea-4fa6-963b-d738b3e26737] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003502821s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-130695 addons disable inspektor-gadget --alsologtostderr -v=1: (5.71423883s)
--- PASS: TestAddons/parallel/InspektorGadget (11.72s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.546568ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-428th" [9325a051-217d-4bf6-9522-aea8c7843065] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004341864s
addons_test.go:465: (dbg) Run:  kubectl --context addons-130695 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

                                                
                                    
x
+
TestAddons/parallel/CSI (40.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1227 08:31:41.290600    4288 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 08:31:41.294771    4288 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 08:31:41.294804    4288 kapi.go:107] duration metric: took 6.191088ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.203118ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [5127c462-3cd7-428a-b0d0-853898cb4cbf] Pending
helpers_test.go:353: "task-pv-pod" [5127c462-3cd7-428a-b0d0-853898cb4cbf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [5127c462-3cd7-428a-b0d0-853898cb4cbf] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003920127s
addons_test.go:574: (dbg) Run:  kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-130695 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-130695 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-130695 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-130695 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [d4a78483-fe96-4ffe-87e7-73934b14b0c3] Pending
helpers_test.go:353: "task-pv-pod-restore" [d4a78483-fe96-4ffe-87e7-73934b14b0c3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [d4a78483-fe96-4ffe-87e7-73934b14b0c3] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005528288s
addons_test.go:616: (dbg) Run:  kubectl --context addons-130695 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-130695 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-130695 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-130695 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.996049554s)
--- PASS: TestAddons/parallel/CSI (40.06s)
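
Note: the CSI flow above is a create/snapshot/restore round trip; a condensed sketch using the same manifests and object names the test uses (the testdata files live in the minikube repository, so this assumes a checkout of that tree):

	kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/snapshot.yaml
	# wait for the snapshot to become usable
	kubectl --context addons-130695 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
	# restore into a new claim and pod
	kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-130695 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml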

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-130695 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-130695 --alsologtostderr -v=1: (1.026018062s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-ncp9s" [1efbb800-1870-4f48-b441-3d8a90c58474] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-ncp9s" [1efbb800-1870-4f48-b441-3d8a90c58474] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003855955s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-130695 addons disable headlamp --alsologtostderr -v=1: (5.771158115s)
--- PASS: TestAddons/parallel/Headlamp (15.80s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-wnkls" [f5a61479-f13c-4b89-897e-8aa535918fe3] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003372101s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.43s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-130695 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-130695 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-130695 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [2b65b414-6741-4e28-b19b-44486fa71a5e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [2b65b414-6741-4e28-b19b-44486fa71a5e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [2b65b414-6741-4e28-b19b-44486fa71a5e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005223639s
addons_test.go:969: (dbg) Run:  kubectl --context addons-130695 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 ssh "cat /opt/local-path-provisioner/pvc-9b262a5c-dc85-40ad-8e7e-b7cd334ebd35_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-130695 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-130695 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-130695 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.048616356s)
--- PASS: TestAddons/parallel/LocalPath (51.43s)
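
Note: the local-path check amounts to binding a PVC through the rancher provisioner, letting a pod write to it, then reading the file back from the node's hostPath. A sketch; the pvc-<id> directory name is assigned by the provisioner at runtime, as in the log above:

	kubectl --context addons-130695 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-130695 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# once the pod has completed, the written data is visible on the node
	out/minikube-linux-arm64 -p addons-130695 ssh "cat /opt/local-path-provisioner/pvc-<id>_default_test-pvc/file1"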

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-zss54" [391a7e50-aa35-406c-91f5-2f674655e122] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003756318s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-865bfb49b9-nv2mj" [69d9127e-e505-485b-a33b-90b30bafcfac] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003400628s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-130695 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-130695 addons disable yakd --alsologtostderr -v=1: (5.791877205s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.34s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-130695
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-130695: (12.058285993s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-130695
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-130695
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-130695
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

                                                
                                    
x
+
TestCertOptions (29.45s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-229858 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-229858 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (26.659340198s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-229858 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-229858 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-229858 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-229858" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-229858
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-229858: (2.047153381s)
--- PASS: TestCertOptions (29.45s)
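
Note: to confirm that the extra SANs (192.168.15.15, www.google.com, ...) and the non-default apiserver port 8555 actually landed in the serving certificate, the test inspects the certificate on the node; the same check can be run by hand against this profile:

	out/minikube-linux-arm64 -p cert-options-229858 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	# the kubeconfig inside the node should also point at port 8555
	out/minikube-linux-arm64 ssh -p cert-options-229858 -- "sudo cat /etc/kubernetes/admin.conf"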

                                                
                                    
x
+
TestCertExpiration (213.86s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-147576 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-147576 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.900336359s)
E1227 09:08:33.640680    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:10:30.589670    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-147576 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-147576 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.463397277s)
helpers_test.go:176: Cleaning up "cert-expiration-147576" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-147576
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-147576: (2.496727741s)
--- PASS: TestCertExpiration (213.86s)

                                                
                                    
x
+
TestDockerEnvContainerd (41.98s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-213135 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-213135 --driver=docker  --container-runtime=containerd: (26.681527895s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-213135"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-213135": (1.05986261s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-88IxW602YSY1/agent.24124" SSH_AGENT_PID="24125" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-88IxW602YSY1/agent.24124" SSH_AGENT_PID="24125" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-88IxW602YSY1/agent.24124" SSH_AGENT_PID="24125" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.301008452s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-88IxW602YSY1/agent.24124" SSH_AGENT_PID="24125" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-213135" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-213135
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-213135: (2.073566309s)
--- PASS: TestDockerEnvContainerd (41.98s)
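
Note: the docker-env flow exercised here can be reproduced interactively; a hedged sketch assuming a running containerd profile like the one above. The eval form is the usual way to consume docker-env output, in place of the explicit SSH_AUTH_SOCK/DOCKER_HOST variables the harness sets:

	# point the local docker CLI at the docker endpoint inside the minikube node over ssh
	eval "$(out/minikube-linux-arm64 -p dockerenv-213135 docker-env --ssh-host --ssh-add)"
	# build directly into the cluster and confirm the image is visible there
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls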

                                                
                                    
x
+
TestErrorSpam/setup (27.74s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-121594 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-121594 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-121594 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-121594 --driver=docker  --container-runtime=containerd: (27.740958899s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (27.74s)

                                                
                                    
x
+
TestErrorSpam/start (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

                                                
                                    
x
+
TestErrorSpam/status (1.15s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 status
--- PASS: TestErrorSpam/status (1.15s)

                                                
                                    
x
+
TestErrorSpam/pause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 pause
--- PASS: TestErrorSpam/pause (1.75s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 stop: (1.386543351s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-121594 --log_dir /tmp/nospam-121594 stop
--- PASS: TestErrorSpam/stop (1.58s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22344-2451/.minikube/files/etc/test/nested/copy/4288/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (45.62s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562438 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1227 08:35:30.592374    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:30.598454    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:30.608874    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:30.629160    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:30.669397    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:30.749717    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:30.910151    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:31.230574    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:31.871480    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:33.151739    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-562438 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (45.614451296s)
--- PASS: TestFunctional/serial/StartWithProxy (45.62s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.93s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1227 08:35:34.199018    4288 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562438 --alsologtostderr -v=8
E1227 08:35:35.712667    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:35:40.832827    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-562438 --alsologtostderr -v=8: (6.930745684s)
functional_test.go:678: soft start took 6.932953064s for "functional-562438" cluster.
I1227 08:35:41.130316    4288 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (6.93s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-562438 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 cache add registry.k8s.io/pause:3.1: (1.317517437s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 cache add registry.k8s.io/pause:3.3: (1.106415415s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 cache add registry.k8s.io/pause:latest: (1.049825727s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-562438 /tmp/TestFunctionalserialCacheCmdcacheadd_local1848860880/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cache add minikube-local-cache-test:functional-562438
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cache delete minikube-local-cache-test:functional-562438
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-562438
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.930191ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)
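
The round trip above (remove the cached pause image inside the node, confirm crictl no longer sees it, then restore it with `cache reload`) can also be scripted directly against the binary. A minimal Go sketch, reusing the out/minikube-linux-arm64 path and the functional-562438 profile from this log; both are assumptions if you run it against a different build or profile:

// cache_reload_sketch.go - minimal sketch of the cache reload round trip
// shown above; binary path, profile, and image name are taken from the log.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	const profile = "functional-562438"
	const image = "registry.k8s.io/pause:latest"

	// Remove the image from the node's containerd store.
	if err := run("-p", profile, "ssh", "sudo crictl rmi "+image); err != nil {
		log.Fatal(err)
	}
	// inspecti should now fail, mirroring the exit status 1 in the log above.
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		log.Fatal("expected the image to be gone")
	}
	// cache reload pushes the locally cached image back into the node.
	if err := run("-p", profile, "cache", "reload"); err != nil {
		log.Fatal(err)
	}
	// The image should be present again.
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		log.Fatal(err)
	}
}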

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 kubectl -- --context functional-562438 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-562438 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562438 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1227 08:35:51.073836    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:36:11.554135    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-562438 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.264388794s)
functional_test.go:776: restart took 43.264493261s for "functional-562438" cluster.
I1227 08:36:32.030722    4288 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (43.26s)
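
One way to confirm that the --extra-config value survived the restart is to read the kube-apiserver static pod's command line. A small sketch under two assumptions not verified by this log: the kubeadm-standard component=kube-apiserver label is present, and the NamespaceAutoProvision plugin name appears verbatim among the flags:

// extra_config_sketch.go - check that the restarted kube-apiserver carries the
// admission plugin passed via --extra-config above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-562438",
		"-n", "kube-system", "get", "pods", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		log.Fatal(err)
	}
	cmdline := string(out)
	if strings.Contains(cmdline, "NamespaceAutoProvision") {
		fmt.Println("extra-config applied: NamespaceAutoProvision is enabled")
	} else {
		fmt.Println("NamespaceAutoProvision not found in apiserver flags")
	}
}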

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-562438 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
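
The health check logged above reduces to decoding `kubectl get po -o json` for the control-plane pods and asserting phase and readiness. A minimal sketch of that check, assuming the same functional-562438 context and keeping only the fields the assertions need:

// component_health_sketch.go - decode the control-plane pod list and assert
// each pod is Running and Ready, mirroring the checks logged above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-562438",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		if p.Status.Phase != "Running" || ready != "True" {
			log.Fatalf("%s is not healthy", p.Metadata.Name)
		}
	}
}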

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 logs: (1.479453344s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 logs --file /tmp/TestFunctionalserialLogsFileCmd1022341747/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 logs --file /tmp/TestFunctionalserialLogsFileCmd1022341747/001/logs.txt: (1.47486085s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.18s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-562438 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-562438
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-562438: exit status 115 (375.090948ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30905 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-562438 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.18s)
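
Exit status 115 is the SVC_UNREACHABLE path shown in the stderr box above: the service exists and gets a NodePort, but no running pod backs it. A small sketch of surfacing that exit code from Go, with the binary path and profile taken from this log:

// invalid_service_sketch.go - run `minikube service` against a service whose
// pods are not running and report the resulting exit code (115 in the run above).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-562438")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpectedly succeeded")
	case errors.As(err, &exitErr):
		fmt.Println("exit code:", exitErr.ExitCode()) // 115 corresponds to SVC_UNREACHABLE in the run above
	default:
		fmt.Println("could not run minikube:", err)
	}
}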

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 config get cpus: exit status 14 (58.210284ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 config get cpus: exit status 14 (79.779684ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
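
The cycle above is set, get, unset, with `config get` exiting 14 when the key is absent. A minimal sketch of the same cycle; binary path and profile are the ones from the log:

// config_cycle_sketch.go - set, read, and unset a per-profile config value;
// `config get` on a missing key exits with status 14, as in the run above.
package main

import (
	"fmt"
	"os/exec"
)

func configCmd(args ...string) (string, int) {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-562438", "config"}, args...)...)
	out, err := cmd.CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	} else if err != nil {
		code = -1 // the binary could not be started at all
	}
	return string(out), code
}

func main() {
	if _, code := configCmd("get", "cpus"); code == 14 {
		fmt.Println("cpus is unset, as expected")
	}
	configCmd("set", "cpus", "2")
	val, _ := configCmd("get", "cpus")
	fmt.Print("cpus = ", val)
	configCmd("unset", "cpus")
	if _, code := configCmd("get", "cpus"); code == 14 {
		fmt.Println("cpus is unset again")
	}
}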

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-562438 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-562438 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 40913: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.81s)

                                                
                                    
TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562438 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562438 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (209.014668ms)

                                                
                                                
-- stdout --
	* [functional-562438] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:37:09.388421   39153 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:37:09.388612   39153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:37:09.388643   39153 out.go:374] Setting ErrFile to fd 2...
	I1227 08:37:09.388693   39153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:37:09.389101   39153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:37:09.389602   39153 out.go:368] Setting JSON to false
	I1227 08:37:09.390551   39153 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1183,"bootTime":1766823447,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 08:37:09.390677   39153 start.go:143] virtualization:  
	I1227 08:37:09.394311   39153 out.go:179] * [functional-562438] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 08:37:09.398177   39153 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 08:37:09.398258   39153 notify.go:221] Checking for updates...
	I1227 08:37:09.404160   39153 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:37:09.407368   39153 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 08:37:09.410338   39153 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 08:37:09.413323   39153 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 08:37:09.416318   39153 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 08:37:09.419829   39153 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:37:09.420548   39153 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:37:09.453872   39153 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 08:37:09.454032   39153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 08:37:09.527871   39153 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 08:37:09.518185625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 08:37:09.528000   39153 docker.go:319] overlay module found
	I1227 08:37:09.531181   39153 out.go:179] * Using the docker driver based on existing profile
	I1227 08:37:09.533980   39153 start.go:309] selected driver: docker
	I1227 08:37:09.533997   39153 start.go:928] validating driver "docker" against &{Name:functional-562438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-562438 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:37:09.534094   39153 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 08:37:09.537542   39153 out.go:203] 
	W1227 08:37:09.540491   39153 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 08:37:09.543350   39153 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562438 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-562438 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562438 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (249.906062ms)

                                                
                                                
-- stdout --
	* [functional-562438] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:37:16.197299   40324 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:37:16.197470   40324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:37:16.197494   40324 out.go:374] Setting ErrFile to fd 2...
	I1227 08:37:16.197520   40324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:37:16.197987   40324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:37:16.198415   40324 out.go:368] Setting JSON to false
	I1227 08:37:16.199466   40324 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1190,"bootTime":1766823447,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 08:37:16.201365   40324 start.go:143] virtualization:  
	I1227 08:37:16.204770   40324 out.go:179] * [functional-562438] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1227 08:37:16.207876   40324 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 08:37:16.208011   40324 notify.go:221] Checking for updates...
	I1227 08:37:16.213731   40324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 08:37:16.216693   40324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 08:37:16.219598   40324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 08:37:16.222603   40324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 08:37:16.225572   40324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 08:37:16.229073   40324 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:37:16.229704   40324 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 08:37:16.277239   40324 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 08:37:16.277387   40324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 08:37:16.351785   40324 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 08:37:16.342037205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 08:37:16.351895   40324 docker.go:319] overlay module found
	I1227 08:37:16.357111   40324 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 08:37:16.360119   40324 start.go:309] selected driver: docker
	I1227 08:37:16.360159   40324 start.go:928] validating driver "docker" against &{Name:functional-562438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-562438 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 08:37:16.360270   40324 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 08:37:16.364058   40324 out.go:203] 
	W1227 08:37:16.367030   40324 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 08:37:16.369717   40324 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
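
Besides the Go-template form used above (the literal "kublet:" text is just an output label, the field actually read is .Kubelet), `status -o json` is the easier form to consume from code. A minimal sketch; the struct field names are an assumption that the JSON keys match the template keys .Host, .Kubelet, .APIServer and .Kubeconfig:

// status_json_sketch.go - decode `minikube status -o json` for the fields
// exercised by the run above; field names are assumed, not verified here.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-562438", "status", "-o", "json").Output()
	if err != nil {
		log.Fatal(err) // status also exits non-zero when components are down
	}
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}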

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-562438 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-562438 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-g446s" [ab5dd757-e7ef-47b6-8c84-a8a0854ec67b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-g446s" [ab5dd757-e7ef-47b6-8c84-a8a0854ec67b] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.002989694s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31310
functional_test.go:1685: http://192.168.49.2:31310: success! body:
Request served by hello-node-connect-5d95464fd4-g446s

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31310
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.62s)
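
The connectivity check is: resolve the NodePort URL with `service --url`, then issue a plain HTTP GET and confirm the echo-server answers. A minimal sketch, assuming the hello-node-connect deployment has already been created and exposed as in the steps above:

// service_connect_sketch.go - look up the NodePort URL for hello-node-connect
// and probe it over HTTP, mirroring the check logged above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-562438", "service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}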

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (20.98s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f2af6a39-e190-4684-b6f3-f3bb6b7116cc] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00404468s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-562438 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-562438 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-562438 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-562438 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [d5e54cd1-5164-45a1-8157-2f6d3f30609c] Pending
helpers_test.go:353: "sp-pod" [d5e54cd1-5164-45a1-8157-2f6d3f30609c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [d5e54cd1-5164-45a1-8157-2f6d3f30609c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003548519s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-562438 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-562438 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-562438 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [d89daa91-0861-4889-8f18-cef782b79304] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [d89daa91-0861-4889-8f18-cef782b79304] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003200375s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-562438 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.98s)
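
What this subtest actually demonstrates is that data written through the claim survives pod deletion: touch a file in the first sp-pod, delete the pod, recreate it from the same manifest, and list the mount again. A compressed sketch of that sequence (context and manifest path as in the log); unlike the real test it does not wait for the recreated pod to become Ready, so the final exec may need a retry:

// pvc_persistence_sketch.go - write a file through the PVC-backed mount,
// recreate the pod, and confirm the file survived, as in the steps above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) string {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-562438"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits here for the new sp-pod to report Running/Ready.
	fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}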

                                                
                                    
TestFunctional/parallel/SSHCmd (0.76s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.11s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh -n functional-562438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cp functional-562438:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd496583932/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh -n functional-562438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh -n functional-562438 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.11s)
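
The cp checks are round trips: copy a file into the node and read it back over ssh, with the last pair targeting a directory that has to be created on the fly. A minimal sketch of the first round trip, with the binary and profile from the log:

// cp_roundtrip_sketch.go - copy a local file into the node and read it back
// over ssh, mirroring the first two steps logged above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func minikube(args ...string) string {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-562438"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	minikube("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(minikube("ssh", "-n", "functional-562438", "sudo cat /home/docker/cp-test.txt"))
}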

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/4288/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo cat /etc/test/nested/copy/4288/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/4288.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo cat /etc/ssl/certs/4288.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/4288.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo cat /usr/share/ca-certificates/4288.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/42882.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo cat /etc/ssl/certs/42882.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/42882.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo cat /usr/share/ca-certificates/42882.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-562438 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 ssh "sudo systemctl is-active docker": exit status 1 (385.136203ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 ssh "sudo systemctl is-active crio": exit status 1 (386.477808ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)
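
The non-zero exits here are the passing outcome: `systemctl is-active` prints "inactive" and exits with status 3 for a stopped unit, ssh relays that, and minikube reports exit status 1, so docker and crio being inactive is exactly what a containerd node should show. A small sketch of the same probe:

// runtime_inactive_sketch.go - probe docker and crio through the node's
// systemd; "inactive" plus a non-zero exit is the expected result when
// containerd is the active runtime, as in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		cmd := exec.Command("out/minikube-linux-arm64",
			"-p", "functional-562438", "ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: %q (err=%v)\n", unit, state, err)
		if err == nil || !strings.Contains(state, "inactive") {
			fmt.Printf("unexpected: %s appears to be active\n", unit)
		}
	}
}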

                                                
                                    
TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 version -o=json --components: (1.443787735s)
--- PASS: TestFunctional/parallel/Version/components (1.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562438 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-562438
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562438 image ls --format short --alsologtostderr:
I1227 08:37:24.237967   42020 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:24.238126   42020 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:24.238132   42020 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:24.238136   42020 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:24.238404   42020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
I1227 08:37:24.239335   42020 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:24.239520   42020 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:24.240197   42020 cli_runner.go:164] Run: docker container inspect functional-562438 --format={{.State.Status}}
I1227 08:37:24.261468   42020 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:24.261527   42020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562438
I1227 08:37:24.282284   42020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/functional-562438/id_rsa Username:docker}
I1227 08:37:24.384002   42020 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562438 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ sha256:de369f │ 22.4MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ sha256:ddc842 │ 15.4MB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ docker.io/library/minikube-local-cache-test       │ functional-562438                     │ sha256:977b74 │ 992B   │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ sha256:962dbb │ 23MB   │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ sha256:c3fcf2 │ 24.7MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-562438                     │ sha256:ce2d2c │ 2.17MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                             │ latest                                │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ sha256:88898f │ 20.7MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562438 image ls --format table --alsologtostderr:
I1227 08:37:25.540210   42266 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:25.540366   42266 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:25.540376   42266 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:25.540381   42266 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:25.540648   42266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
I1227 08:37:25.541348   42266 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:25.541464   42266 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:25.541983   42266 cli_runner.go:164] Run: docker container inspect functional-562438 --format={{.State.Status}}
I1227 08:37:25.559917   42266 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:25.560024   42266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562438
I1227 08:37:25.578248   42266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/functional-562438/id_rsa Username:docker}
I1227 08:37:25.674373   42266 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
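
Of the list formats exercised here, the JSON one (dumped in full by the ImageListJson run below) is the easiest to consume programmatically. A minimal sketch of decoding it; the field names are taken from that dump, and size is a decimal string of bytes:

// image_ls_json_sketch.go - decode `image ls --format json` and print a
// compact id/size/tags listing; extra fields such as repoDigests are ignored.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-562438", "image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%-14.14s %10s  %v\n", img.ID, img.Size, img.RepoTags)
	}
}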

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562438 image ls --format json --alsologtostderr:
[{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"22432091"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","re
poDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93e
fc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22987510"},{"id":"sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"24692295"},{"id":"sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"20672243"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["regi
stry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"},{"id":"sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"15405198"},{"id":"sha256:d7b100cd9a77
ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:977b7460ebbf26d5cdb4afee99628fbfb2c49ca60bde14f804efe474cab74c6b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-562438"],"size":"992"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562438 image ls --format json --alsologtostderr:
I1227 08:37:25.253636   42196 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:25.253827   42196 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:25.253850   42196 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:25.253870   42196 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:25.254150   42196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
I1227 08:37:25.254962   42196 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:25.255151   42196 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:25.255827   42196 cli_runner.go:164] Run: docker container inspect functional-562438 --format={{.State.Status}}
I1227 08:37:25.285319   42196 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:25.285451   42196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562438
I1227 08:37:25.324595   42196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/functional-562438/id_rsa Username:docker}
I1227 08:37:25.443716   42196 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
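For anyone scripting against the listing above, here is a minimal sketch of decoding that `image ls --format json` output in Go. The struct fields are assumptions read off the stdout shown above (id, repoDigests, repoTags, size as a byte-count string), and the binary path and profile name are simply the ones this job uses:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the stdout above; the field set
// is an assumption based on this run, not a documented schema.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // emitted as a string of bytes
}

func main() {
	// Hypothetical invocation; adjust binary path and profile for your setup.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-562438",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s -> %v (%s bytes)\n", img.ID, img.RepoTags, img.Size)
	}
}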

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-562438 image ls --format yaml --alsologtostderr:
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "24692295"
- id: sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "20672243"
- id: sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "15405198"
- id: sha256:977b7460ebbf26d5cdb4afee99628fbfb2c49ca60bde14f804efe474cab74c6b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-562438
size: "992"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22987510"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "22432091"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "2173567"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562438 image ls --format yaml --alsologtostderr:
I1227 08:37:24.473267   42059 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:24.473459   42059 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:24.473490   42059 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:24.473511   42059 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:24.473914   42059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
I1227 08:37:24.474889   42059 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:24.475091   42059 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:24.475879   42059 cli_runner.go:164] Run: docker container inspect functional-562438 --format={{.State.Status}}
I1227 08:37:24.495335   42059 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:24.495389   42059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562438
I1227 08:37:24.514695   42059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/functional-562438/id_rsa Username:docker}
I1227 08:37:24.616282   42059 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 ssh pgrep buildkitd: exit status 1 (289.121192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image build -t localhost/my-image:functional-562438 testdata/build --alsologtostderr
2025/12/27 08:37:24 [DEBUG] GET http://127.0.0.1:45725/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 image build -t localhost/my-image:functional-562438 testdata/build --alsologtostderr: (3.390296484s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-562438 image build -t localhost/my-image:functional-562438 testdata/build --alsologtostderr:
I1227 08:37:25.022094   42159 out.go:360] Setting OutFile to fd 1 ...
I1227 08:37:25.022446   42159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:25.022483   42159 out.go:374] Setting ErrFile to fd 2...
I1227 08:37:25.022505   42159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 08:37:25.022852   42159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
I1227 08:37:25.023761   42159 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:25.026482   42159 config.go:182] Loaded profile config "functional-562438": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 08:37:25.027249   42159 cli_runner.go:164] Run: docker container inspect functional-562438 --format={{.State.Status}}
I1227 08:37:25.046034   42159 ssh_runner.go:195] Run: systemctl --version
I1227 08:37:25.046120   42159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562438
I1227 08:37:25.064514   42159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/functional-562438/id_rsa Username:docker}
I1227 08:37:25.166922   42159 build_images.go:162] Building image from path: /tmp/build.804969573.tar
I1227 08:37:25.167017   42159 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 08:37:25.176087   42159 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.804969573.tar
I1227 08:37:25.180073   42159 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.804969573.tar: stat -c "%s %y" /var/lib/minikube/build/build.804969573.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.804969573.tar': No such file or directory
I1227 08:37:25.180110   42159 ssh_runner.go:362] scp /tmp/build.804969573.tar --> /var/lib/minikube/build/build.804969573.tar (3072 bytes)
I1227 08:37:25.211168   42159 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.804969573
I1227 08:37:25.220510   42159 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.804969573 -xf /var/lib/minikube/build/build.804969573.tar
I1227 08:37:25.230587   42159 containerd.go:402] Building image: /var/lib/minikube/build/build.804969573
I1227 08:37:25.230662   42159 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.804969573 --local dockerfile=/var/lib/minikube/build/build.804969573 --output type=image,name=localhost/my-image:functional-562438
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:60fb40df9869ea3e105fec7a579288f8cd2a057c27d2c1fff55c0fbce14a1f89
#8 exporting manifest sha256:60fb40df9869ea3e105fec7a579288f8cd2a057c27d2c1fff55c0fbce14a1f89 0.0s done
#8 exporting config sha256:397105701f74c4e65cdd72d0695f95a08d4dd930ef36e6360b1deaa902520938 0.0s done
#8 naming to localhost/my-image:functional-562438 done
#8 DONE 0.2s
I1227 08:37:28.314268   42159 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.804969573 --local dockerfile=/var/lib/minikube/build/build.804969573 --output type=image,name=localhost/my-image:functional-562438: (3.083582174s)
I1227 08:37:28.314343   42159 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.804969573
I1227 08:37:28.323537   42159 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.804969573.tar
I1227 08:37:28.332518   42159 build_images.go:218] Built localhost/my-image:functional-562438 from /tmp/build.804969573.tar
I1227 08:37:28.332558   42159 build_images.go:134] succeeded building to: functional-562438
I1227 08:37:28.332563   42159 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)
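The buildkit steps above imply a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). Below is a minimal sketch of reproducing that build with the same `image build` verb the test runs at functional_test.go:330; the Dockerfile and content.txt here are illustrative reconstructions, not the repository's actual testdata/build files:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Illustrative build context matching the three steps in the buildkit log above.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// Same CLI verb the test exercises; minikube copies the context into the node
	// as a tarball and drives buildctl, as the ssh_runner lines above show.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-562438",
		"image", "build", "-t", "localhost/my-image:functional-562438", dir, "--alsologtostderr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}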

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 --alsologtostderr: (1.215258617s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 --alsologtostderr: (1.087531468s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-562438 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 --alsologtostderr: (1.127496496s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "495.134053ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "74.152796ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "510.761385ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "73.814597ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562438 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562438 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-562438 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-562438 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 38022: os: process already finished
helpers_test.go:526: unable to kill pid 37838: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)
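ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a tarball round trip. A minimal sketch of the same flow with the CLI verbs shown above, assuming the echo-server tag is present in the cluster and using a throwaway path for the tar:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

// run executes one minikube invocation against the functional-562438 profile
// and streams its output; the binary path and profile are this job's values.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64", append([]string{"-p", "functional-562438"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	tar := filepath.Join(os.TempDir(), "echo-server-save.tar")
	image := "ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438"

	// Save the image to a tarball, remove it from the runtime, load it back,
	// then list images to confirm it reappeared.
	for _, args := range [][]string{
		{"image", "save", image, tar},
		{"image", "rm", image},
		{"image", "load", tar},
		{"image", "ls"},
	} {
		if err := run(args...); err != nil {
			panic(err)
		}
	}
}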

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-562438 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-562438 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [3c16aa1b-94e1-4405-9ac4-f59b9cf13cc2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [3c16aa1b-94e1-4405-9ac4-f59b9cf13cc2] Running
E1227 08:36:52.515343    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003163108s
I1227 08:36:56.594696    4288 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-562438 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
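The jsonpath query above reads the tunnel-assigned LoadBalancer address straight off the Service status. A minimal sketch of the same lookup with client-go, assuming the default kubeconfig's current context already points at the functional-562438 cluster:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config); switch the
	// context to the minikube profile first if it is not already current.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field the test reads: .status.loadBalancer.ingress[0].ip
	svc, err := client.CoreV1().Services("default").Get(context.TODO(), "nginx-svc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ing := range svc.Status.LoadBalancer.Ingress {
		fmt.Println("ingress IP:", ing.IP)
	}
}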

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.130.250 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-562438 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-562438 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-562438 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-wmhsg" [0af62cbd-ed66-4414-92e0-58aead9be242] Pending
helpers_test.go:353: "hello-node-684ffdf98c-wmhsg" [0af62cbd-ed66-4414-92e0-58aead9be242] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003783816s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdany-port68953838/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766824629825676689" to /tmp/TestFunctionalparallelMountCmdany-port68953838/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766824629825676689" to /tmp/TestFunctionalparallelMountCmdany-port68953838/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766824629825676689" to /tmp/TestFunctionalparallelMountCmdany-port68953838/001/test-1766824629825676689
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.7197ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 08:37:10.180280    4288 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 08:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 08:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 08:37 test-1766824629825676689
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh cat /mount-9p/test-1766824629825676689
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-562438 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [44e1f022-2fef-445a-b0db-ad44ae47ffd2] Pending
helpers_test.go:353: "busybox-mount" [44e1f022-2fef-445a-b0db-ad44ae47ffd2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [44e1f022-2fef-445a-b0db-ad44ae47ffd2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [44e1f022-2fef-445a-b0db-ad44ae47ffd2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004260533s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-562438 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdany-port68953838/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.04s)
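The `will retry after 400ms` line above comes from the harness retrying the findmnt probe until the 9p mount shows up. A minimal sketch of that pattern follows; it is a simplified stand-in, not minikube's actual retry package:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times with a fixed delay between tries,
// a simplified analogue of the backoff whose log lines appear above.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Hypothetical probe mirroring the findmnt check the mount test retries.
	err := retry(5, 400*time.Millisecond, func() error {
		return exec.Command("out/minikube-linux-arm64", "-p", "functional-562438",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	if err != nil {
		fmt.Println("mount never appeared:", err)
	}
}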

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 service list -o json
functional_test.go:1509: Took "521.504731ms" to run "out/minikube-linux-arm64 -p functional-562438 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31562
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31562
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
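The HTTPS, Format and URL subtests all resolve the same hello-node NodePort endpoint (http://192.168.49.2:31562 in this run). A minimal sketch of hitting that endpoint directly; the address is only valid while this particular cluster exists, so treat it as a placeholder:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint taken from the `service hello-node --url` output above;
	// substitute whatever URL your own run prints.
	resp, err := http.Get("http://192.168.49.2:31562")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // echo-server replies with the request it received
}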

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdspecific-port2562657327/001:/mount-9p --alsologtostderr -v=1 --port 33791]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdspecific-port2562657327/001:/mount-9p --alsologtostderr -v=1 --port 33791] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 ssh "sudo umount -f /mount-9p": exit status 1 (411.689598ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-562438 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdspecific-port2562657327/001:/mount-9p --alsologtostderr -v=1 --port 33791] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2741428578/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2741428578/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2741428578/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T" /mount1: exit status 1 (1.067481411s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1227 08:37:20.741907    4288 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-562438 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-562438 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2741428578/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2741428578/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-562438 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2741428578/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.52s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-562438
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-562438
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-562438
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (150.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1227 08:38:14.436224    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m29.219419449s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (150.15s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 kubectl -- rollout status deployment/busybox: (4.148704643s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-ctcw8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-jkc4v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-xz9jj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-ctcw8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-jkc4v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-xz9jj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-ctcw8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-jkc4v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-xz9jj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.14s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-ctcw8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-ctcw8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-jkc4v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-jkc4v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-xz9jj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 kubectl -- exec busybox-769dd8b7dd-xz9jj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (30.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 node add --alsologtostderr -v 5
E1227 08:40:30.589631    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 node add --alsologtostderr -v 5: (29.791753077s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5: (1.041265109s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.83s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-243995 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.106211356s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (20.47s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 status --output json --alsologtostderr -v 5: (1.083610273s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp testdata/cp-test.txt ha-243995:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile250752762/001/cp-test_ha-243995.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995:/home/docker/cp-test.txt ha-243995-m02:/home/docker/cp-test_ha-243995_ha-243995-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m02 "sudo cat /home/docker/cp-test_ha-243995_ha-243995-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995:/home/docker/cp-test.txt ha-243995-m03:/home/docker/cp-test_ha-243995_ha-243995-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m03 "sudo cat /home/docker/cp-test_ha-243995_ha-243995-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995:/home/docker/cp-test.txt ha-243995-m04:/home/docker/cp-test_ha-243995_ha-243995-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m04 "sudo cat /home/docker/cp-test_ha-243995_ha-243995-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp testdata/cp-test.txt ha-243995-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile250752762/001/cp-test_ha-243995-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m02:/home/docker/cp-test.txt ha-243995:/home/docker/cp-test_ha-243995-m02_ha-243995.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995 "sudo cat /home/docker/cp-test_ha-243995-m02_ha-243995.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m02:/home/docker/cp-test.txt ha-243995-m03:/home/docker/cp-test_ha-243995-m02_ha-243995-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m03 "sudo cat /home/docker/cp-test_ha-243995-m02_ha-243995-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m02:/home/docker/cp-test.txt ha-243995-m04:/home/docker/cp-test_ha-243995-m02_ha-243995-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m04 "sudo cat /home/docker/cp-test_ha-243995-m02_ha-243995-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp testdata/cp-test.txt ha-243995-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile250752762/001/cp-test_ha-243995-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m03:/home/docker/cp-test.txt ha-243995:/home/docker/cp-test_ha-243995-m03_ha-243995.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995 "sudo cat /home/docker/cp-test_ha-243995-m03_ha-243995.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m03:/home/docker/cp-test.txt ha-243995-m02:/home/docker/cp-test_ha-243995-m03_ha-243995-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m02 "sudo cat /home/docker/cp-test_ha-243995-m03_ha-243995-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m03:/home/docker/cp-test.txt ha-243995-m04:/home/docker/cp-test_ha-243995-m03_ha-243995-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m04 "sudo cat /home/docker/cp-test_ha-243995-m03_ha-243995-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp testdata/cp-test.txt ha-243995-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m04 "sudo cat /home/docker/cp-test.txt"
E1227 08:40:58.277298    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile250752762/001/cp-test_ha-243995-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m04:/home/docker/cp-test.txt ha-243995:/home/docker/cp-test_ha-243995-m04_ha-243995.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995 "sudo cat /home/docker/cp-test_ha-243995-m04_ha-243995.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m04:/home/docker/cp-test.txt ha-243995-m02:/home/docker/cp-test_ha-243995-m04_ha-243995-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m02 "sudo cat /home/docker/cp-test_ha-243995-m04_ha-243995-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 cp ha-243995-m04:/home/docker/cp-test.txt ha-243995-m03:/home/docker/cp-test_ha-243995-m04_ha-243995-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 ssh -n ha-243995-m03 "sudo cat /home/docker/cp-test_ha-243995-m04_ha-243995-m03.txt"
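Note: each pair of commands above is a copy-then-verify round trip: "minikube cp" pushes a file onto a node (or pulls it back out), and "minikube ssh -n" cats it to confirm the contents survived. A minimal sketch of one round trip (profile and node names are placeholders):
  out/minikube-linux-arm64 -p <profile> cp testdata/cp-test.txt <profile>-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p <profile> ssh -n <profile>-m02 "sudo cat /home/docker/cp-test.txt"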
--- PASS: TestMultiControlPlane/serial/CopyFile (20.47s)

TestMultiControlPlane/serial/StopSecondaryNode (12.94s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 node stop m02 --alsologtostderr -v 5: (12.174100207s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5: exit status 7 (765.565297ms)

                                                
                                                
-- stdout --
	ha-243995
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-243995-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243995-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-243995-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:41:15.145213   58678 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:41:15.145418   58678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:41:15.145443   58678 out.go:374] Setting ErrFile to fd 2...
	I1227 08:41:15.145504   58678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:41:15.145862   58678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:41:15.146157   58678 out.go:368] Setting JSON to false
	I1227 08:41:15.146234   58678 mustload.go:66] Loading cluster: ha-243995
	I1227 08:41:15.146311   58678 notify.go:221] Checking for updates...
	I1227 08:41:15.147821   58678 config.go:182] Loaded profile config "ha-243995": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:41:15.147878   58678 status.go:174] checking status of ha-243995 ...
	I1227 08:41:15.148631   58678 cli_runner.go:164] Run: docker container inspect ha-243995 --format={{.State.Status}}
	I1227 08:41:15.169640   58678 status.go:371] ha-243995 host status = "Running" (err=<nil>)
	I1227 08:41:15.169663   58678 host.go:66] Checking if "ha-243995" exists ...
	I1227 08:41:15.169966   58678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-243995
	I1227 08:41:15.201245   58678 host.go:66] Checking if "ha-243995" exists ...
	I1227 08:41:15.201543   58678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:41:15.201590   58678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-243995
	I1227 08:41:15.224219   58678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/ha-243995/id_rsa Username:docker}
	I1227 08:41:15.329463   58678 ssh_runner.go:195] Run: systemctl --version
	I1227 08:41:15.335849   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:41:15.349685   58678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 08:41:15.411507   58678 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-27 08:41:15.401538443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 08:41:15.412200   58678 kubeconfig.go:125] found "ha-243995" server: "https://192.168.49.254:8443"
	I1227 08:41:15.412248   58678 api_server.go:166] Checking apiserver status ...
	I1227 08:41:15.412301   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:41:15.425303   58678 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup
	I1227 08:41:15.434273   58678 api_server.go:192] apiserver freezer: "7:freezer:/docker/24b7dad03563f693de1d0a3e124407822ce435ec8467a763f6f64693fd658d29/kubepods/burstable/pod2b4649f66310ddeed450b30fe1712a7e/9a25257dbf589c0f614c8e765e730eaccbf600b1d547180247c2440d2bd23323"
	I1227 08:41:15.434343   58678 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/24b7dad03563f693de1d0a3e124407822ce435ec8467a763f6f64693fd658d29/kubepods/burstable/pod2b4649f66310ddeed450b30fe1712a7e/9a25257dbf589c0f614c8e765e730eaccbf600b1d547180247c2440d2bd23323/freezer.state
	I1227 08:41:15.442391   58678 api_server.go:214] freezer state: "THAWED"
	I1227 08:41:15.442438   58678 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 08:41:15.450811   58678 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 08:41:15.450839   58678 status.go:463] ha-243995 apiserver status = Running (err=<nil>)
	I1227 08:41:15.450850   58678 status.go:176] ha-243995 status: &{Name:ha-243995 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:41:15.450866   58678 status.go:174] checking status of ha-243995-m02 ...
	I1227 08:41:15.451166   58678 cli_runner.go:164] Run: docker container inspect ha-243995-m02 --format={{.State.Status}}
	I1227 08:41:15.468655   58678 status.go:371] ha-243995-m02 host status = "Stopped" (err=<nil>)
	I1227 08:41:15.468680   58678 status.go:384] host is not running, skipping remaining checks
	I1227 08:41:15.468687   58678 status.go:176] ha-243995-m02 status: &{Name:ha-243995-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:41:15.468707   58678 status.go:174] checking status of ha-243995-m03 ...
	I1227 08:41:15.469059   58678 cli_runner.go:164] Run: docker container inspect ha-243995-m03 --format={{.State.Status}}
	I1227 08:41:15.486709   58678 status.go:371] ha-243995-m03 host status = "Running" (err=<nil>)
	I1227 08:41:15.486741   58678 host.go:66] Checking if "ha-243995-m03" exists ...
	I1227 08:41:15.487115   58678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-243995-m03
	I1227 08:41:15.504364   58678 host.go:66] Checking if "ha-243995-m03" exists ...
	I1227 08:41:15.504688   58678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:41:15.504732   58678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-243995-m03
	I1227 08:41:15.525897   58678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/ha-243995-m03/id_rsa Username:docker}
	I1227 08:41:15.625758   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:41:15.639752   58678 kubeconfig.go:125] found "ha-243995" server: "https://192.168.49.254:8443"
	I1227 08:41:15.639778   58678 api_server.go:166] Checking apiserver status ...
	I1227 08:41:15.639822   58678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:41:15.652590   58678 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1344/cgroup
	I1227 08:41:15.661505   58678 api_server.go:192] apiserver freezer: "7:freezer:/docker/09c4984f6ac9290d7528c0ff04669500461c527eed1d61c3503ec0684c4eff32/kubepods/burstable/pod55aeeb6a1e2dac70d838b3761ec9fcbb/8eea3f0a0a807c0b5f109732dc9216a4b638c1f4fb35f147f918a15c742dcb78"
	I1227 08:41:15.661578   58678 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/09c4984f6ac9290d7528c0ff04669500461c527eed1d61c3503ec0684c4eff32/kubepods/burstable/pod55aeeb6a1e2dac70d838b3761ec9fcbb/8eea3f0a0a807c0b5f109732dc9216a4b638c1f4fb35f147f918a15c742dcb78/freezer.state
	I1227 08:41:15.670839   58678 api_server.go:214] freezer state: "THAWED"
	I1227 08:41:15.670868   58678 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 08:41:15.679218   58678 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 08:41:15.679251   58678 status.go:463] ha-243995-m03 apiserver status = Running (err=<nil>)
	I1227 08:41:15.679261   58678 status.go:176] ha-243995-m03 status: &{Name:ha-243995-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:41:15.679300   58678 status.go:174] checking status of ha-243995-m04 ...
	I1227 08:41:15.679636   58678 cli_runner.go:164] Run: docker container inspect ha-243995-m04 --format={{.State.Status}}
	I1227 08:41:15.698792   58678 status.go:371] ha-243995-m04 host status = "Running" (err=<nil>)
	I1227 08:41:15.698822   58678 host.go:66] Checking if "ha-243995-m04" exists ...
	I1227 08:41:15.699126   58678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-243995-m04
	I1227 08:41:15.718765   58678 host.go:66] Checking if "ha-243995-m04" exists ...
	I1227 08:41:15.719077   58678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:41:15.719129   58678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-243995-m04
	I1227 08:41:15.737733   58678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/ha-243995-m04/id_rsa Username:docker}
	I1227 08:41:15.837634   58678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:41:15.851569   58678 status.go:176] ha-243995-m04 status: &{Name:ha-243995-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
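Note: with one control-plane node stopped, "minikube status" still prints per-node state but exits non-zero (exit status 7 in this run), so scripts can detect a partially stopped cluster. A sketch of the same check (profile name is a placeholder):
  out/minikube-linux-arm64 -p <profile> node stop m02 --alsologtostderr -v 5
  out/minikube-linux-arm64 -p <profile> status --alsologtostderr -v 5 \
    || echo "status exited $? because at least one node is stopped"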
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.94s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.25s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 node start m02 --alsologtostderr -v 5: (12.039921954s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5: (1.103478294s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.15549984s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (101.31s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 stop --alsologtostderr -v 5
E1227 08:41:47.166409    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:47.171707    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:47.181992    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:47.202327    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:47.242672    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:47.322986    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:47.483376    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:47.803934    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:48.444942    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:49.725437    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:52.286414    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:41:57.407376    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:42:07.648414    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 stop --alsologtostderr -v 5: (37.675616461s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 start --wait true --alsologtostderr -v 5
E1227 08:42:28.128942    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:43:09.089158    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 start --wait true --alsologtostderr -v 5: (1m3.476933265s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 node list --alsologtostderr -v 5
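Note: the assertion here is that a full stop followed by "start --wait true" brings the cluster back with the same node list. A sketch of the same flow (profile name is a placeholder, the temp files are illustrative):
  out/minikube-linux-arm64 -p <profile> node list --alsologtostderr -v 5 > /tmp/nodes.before
  out/minikube-linux-arm64 -p <profile> stop --alsologtostderr -v 5
  out/minikube-linux-arm64 -p <profile> start --wait true --alsologtostderr -v 5
  out/minikube-linux-arm64 -p <profile> node list --alsologtostderr -v 5 > /tmp/nodes.after
  diff /tmp/nodes.before /tmp/nodes.after    # expected to be empty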
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (101.31s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.63s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 node delete m03 --alsologtostderr -v 5: (9.692538276s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
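Note: after a control-plane node is deleted, the remaining nodes are expected to stay Ready; the go-template query above prints one Ready condition per node. A simpler equivalent check (profile name is a placeholder):
  out/minikube-linux-arm64 -p <profile> node delete m03 --alsologtostderr -v 5
  kubectl --context <profile> get nodes    # remaining nodes should all show STATUS Ready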
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.63s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (36.73s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 stop --alsologtostderr -v 5: (36.525885364s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5: exit status 7 (207.879614ms)

                                                
                                                
-- stdout --
	ha-243995
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243995-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243995-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:44:00.438409   73203 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:44:00.438971   73203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:44:00.439020   73203 out.go:374] Setting ErrFile to fd 2...
	I1227 08:44:00.440133   73203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:44:00.442377   73203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:44:00.442745   73203 out.go:368] Setting JSON to false
	I1227 08:44:00.442783   73203 mustload.go:66] Loading cluster: ha-243995
	I1227 08:44:00.443017   73203 notify.go:221] Checking for updates...
	I1227 08:44:00.444540   73203 config.go:182] Loaded profile config "ha-243995": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:44:00.444584   73203 status.go:174] checking status of ha-243995 ...
	I1227 08:44:00.446556   73203 cli_runner.go:164] Run: docker container inspect ha-243995 --format={{.State.Status}}
	I1227 08:44:00.467851   73203 status.go:371] ha-243995 host status = "Stopped" (err=<nil>)
	I1227 08:44:00.467874   73203 status.go:384] host is not running, skipping remaining checks
	I1227 08:44:00.467881   73203 status.go:176] ha-243995 status: &{Name:ha-243995 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:44:00.467906   73203 status.go:174] checking status of ha-243995-m02 ...
	I1227 08:44:00.468358   73203 cli_runner.go:164] Run: docker container inspect ha-243995-m02 --format={{.State.Status}}
	I1227 08:44:00.525261   73203 status.go:371] ha-243995-m02 host status = "Stopped" (err=<nil>)
	I1227 08:44:00.525295   73203 status.go:384] host is not running, skipping remaining checks
	I1227 08:44:00.525303   73203 status.go:176] ha-243995-m02 status: &{Name:ha-243995-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:44:00.525322   73203 status.go:174] checking status of ha-243995-m04 ...
	I1227 08:44:00.525651   73203 cli_runner.go:164] Run: docker container inspect ha-243995-m04 --format={{.State.Status}}
	I1227 08:44:00.554557   73203 status.go:371] ha-243995-m04 host status = "Stopped" (err=<nil>)
	I1227 08:44:00.554583   73203 status.go:384] host is not running, skipping remaining checks
	I1227 08:44:00.554591   73203 status.go:176] ha-243995-m04 status: &{Name:ha-243995-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.73s)

TestMultiControlPlane/serial/RestartCluster (58.89s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1227 08:44:31.009971    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (57.8935824s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.89s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.33s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.329760311s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.33s)

TestMultiControlPlane/serial/AddSecondaryNode (76.66s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 node add --control-plane --alsologtostderr -v 5
E1227 08:45:30.589631    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 node add --control-plane --alsologtostderr -v 5: (1m15.529525061s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-243995 status --alsologtostderr -v 5: (1.132740341s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.66s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.115837105s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

TestJSONOutput/start/Command (45.21s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-248567 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1227 08:46:47.166273    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-248567 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (45.207537063s)
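Note: with --output=json, each progress step is emitted as one CloudEvents-style JSON object per line (the TestErrorJSONOutput output further below shows the shape), which makes the stream straightforward to post-process. An illustrative consumer using the standard jq tool, which is not part of the test itself (profile name is a placeholder):
  out/minikube-linux-arm64 start -p <profile> --output=json --user=testUser \
    --driver=docker --container-runtime=containerd \
    | jq -r '.type + " " + (.data.message // "")'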
--- PASS: TestJSONOutput/start/Command (45.21s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-248567 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-248567 --output=json --user=testUser
E1227 08:47:14.850510    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-248567 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-248567 --output=json --user=testUser: (6.013435954s)
--- PASS: TestJSONOutput/stop/Command (6.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-807866 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-807866 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (88.19847ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"743014bf-e71f-4dd5-983a-79b1e2371550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-807866] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba66695f-ef5e-481f-a62c-3f8146bf244e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22344"}}
	{"specversion":"1.0","id":"69d75d61-4b1d-4c38-87a8-82289da76c84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c33d48f6-3d37-4682-a31c-6d6550ed553b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig"}}
	{"specversion":"1.0","id":"4352952a-27db-4ff1-a97f-2005a648b2d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube"}}
	{"specversion":"1.0","id":"18d19500-c3f2-44ea-88af-4a09cd2059eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3b0809af-3ed4-4e6a-9ab4-e55aba569bc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2e1f4bd8-9c15-4d8c-83cb-66d9e0a80e81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-807866" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-807866
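Note: the failure case above shows that errors are also emitted as structured events (type io.k8s.sigs.minikube.error, with exitcode, name, and message fields), so automation can separate them from ordinary progress steps. An illustrative filter using the standard jq tool, which is not part of the test itself (profile name is a placeholder):
  out/minikube-linux-arm64 start -p <profile> --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'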
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (30.75s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-411476 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-411476 --network=: (28.445868381s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-411476" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-411476
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-411476: (2.281316305s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.75s)

TestKicCustomNetwork/use_default_bridge_network (29.73s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-694005 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-694005 --network=bridge: (27.607435795s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-694005" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-694005
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-694005: (2.096335415s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.73s)

TestKicExistingNetwork (32.34s)

=== RUN   TestKicExistingNetwork
I1227 08:48:26.517285    4288 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 08:48:26.532982    4288 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 08:48:26.533079    4288 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 08:48:26.533098    4288 cli_runner.go:164] Run: docker network inspect existing-network
W1227 08:48:26.549462    4288 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 08:48:26.549495    4288 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1227 08:48:26.549508    4288 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1227 08:48:26.549643    4288 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 08:48:26.571478    4288 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3499bc401779 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:76:98:a8:d7:e7} reservation:<nil>}
I1227 08:48:26.571786    4288 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016d7bf0}
I1227 08:48:26.572625    4288 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 08:48:26.572702    4288 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 08:48:26.648873    4288 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-895137 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-895137 --network=existing-network: (30.123267783s)
helpers_test.go:176: Cleaning up "existing-network-895137" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-895137
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-895137: (2.041025519s)
I1227 08:48:58.837466    4288 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
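Note: this test first creates the docker network itself (the network_create log above) and then points minikube at it, confirming that --network reuses an existing bridge network rather than creating a new one. A sketch of the same flow (network and profile names are placeholders):
  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 <my-network>
  out/minikube-linux-arm64 start -p <profile> --network=<my-network>
  docker network ls --format {{.Name}}    # <my-network> should still be the one in use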
--- PASS: TestKicExistingNetwork (32.34s)
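
A standalone reproduction of the flow above, outside the test harness (a minimal sketch; the profile name existing-network-demo is illustrative, and the subnet is simply the one this run happened to pick):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-arm64 start -p existing-network-demo --network=existing-network
    docker network inspect existing-network        # the cluster node should appear under Containers
    out/minikube-linux-arm64 delete -p existing-network-demo
    docker network rm existing-network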

                                                
                                    
TestKicCustomSubnet (31.38s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-881274 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-881274 --subnet=192.168.60.0/24: (29.036420438s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-881274 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-881274" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-881274
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-881274: (2.323727619s)
--- PASS: TestKicCustomSubnet (31.38s)
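
The same check can be run by hand; a minimal sketch (the profile name custom-subnet-demo is illustrative):

    out/minikube-linux-arm64 start -p custom-subnet-demo --subnet=192.168.60.0/24
    docker network inspect custom-subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
    out/minikube-linux-arm64 delete -p custom-subnet-demo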

                                                
                                    
TestKicStaticIP (31.16s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-376625 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-376625 --static-ip=192.168.200.200: (28.639553283s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-376625 ip
helpers_test.go:176: Cleaning up "static-ip-376625" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-376625
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-376625: (2.355433026s)
--- PASS: TestKicStaticIP (31.16s)
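
Equivalent manual check, sketched with an illustrative profile name:

    out/minikube-linux-arm64 start -p static-ip-demo --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-demo ip          # expect 192.168.200.200
    out/minikube-linux-arm64 delete -p static-ip-demo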

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (62.21s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-812968 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-812968 --driver=docker  --container-runtime=containerd: (26.61412198s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-815758 --driver=docker  --container-runtime=containerd
E1227 08:50:30.589685    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-815758 --driver=docker  --container-runtime=containerd: (29.711254558s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-812968
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-815758
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-815758" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-815758
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-815758: (2.078160201s)
helpers_test.go:176: Cleaning up "first-812968" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-812968
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-812968: (2.357214009s)
--- PASS: TestMinikubeProfile (62.21s)
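
A hand-run sketch of the same profile-switching flow (profile names are illustrative):

    out/minikube-linux-arm64 start -p first-demo --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p second-demo --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 profile first-demo      # make first-demo the active profile
    out/minikube-linux-arm64 profile list -ojson     # the active profile is reflected in the JSON listing
    out/minikube-linux-arm64 delete -p second-demo
    out/minikube-linux-arm64 delete -p first-demo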

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.2s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-903808 --memory=3072 --mount-string /tmp/TestMountStartserial3314940452/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-903808 --memory=3072 --mount-string /tmp/TestMountStartserial3314940452/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.20147647s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.20s)
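
The mount flags above can be exercised directly; a minimal sketch (the host path /tmp/mount-demo-src and the profile name are illustrative):

    mkdir -p /tmp/mount-demo-src
    out/minikube-linux-arm64 start -p mount-demo --memory=3072 \
      --mount-string /tmp/mount-demo-src:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host   # lists the host directory through the mount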

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-903808 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-905564 --memory=3072 --mount-string /tmp/TestMountStartserial3314940452/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-905564 --memory=3072 --mount-string /tmp/TestMountStartserial3314940452/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.844459075s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.85s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-905564 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-903808 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-903808 --alsologtostderr -v=5: (1.719044262s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-905564 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-905564
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-905564: (1.284725562s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-905564
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-905564: (6.583379613s)
--- PASS: TestMountStart/serial/RestartStopped (7.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-905564 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (74.17s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-310911 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1227 08:51:47.166777    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 08:51:53.638332    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-310911 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m13.614481549s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.17s)
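
Equivalent two-node start, sketched with an illustrative profile name:

    out/minikube-linux-arm64 start -p multinode-demo --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p multinode-demo status    # expect the control plane and the worker to both report Running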

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-310911 -- rollout status deployment/busybox: (2.881163508s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-jm95m -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-vfgrc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-jm95m -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-vfgrc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-jm95m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-vfgrc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.76s)
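
The DNS checks above amount to applying the busybox manifest and resolving the usual service names from every pod; a sketch against the illustrative multinode-demo profile (the manifest path is the one used by the test suite):

    out/minikube-linux-arm64 kubectl -p multinode-demo -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p multinode-demo -- rollout status deployment/busybox
    for pod in $(out/minikube-linux-arm64 kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-arm64 kubectl -p multinode-demo -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done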

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-jm95m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-jm95m -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-vfgrc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-310911 -- exec busybox-769dd8b7dd-vfgrc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
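
The host-reachability probe can be reproduced with the same nslookup/ping pair; a sketch (multinode-demo is illustrative, and the awk/cut pipeline is copied from the commands logged above):

    POD=$(out/minikube-linux-arm64 kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
    HOST_IP=$(out/minikube-linux-arm64 kubectl -p multinode-demo -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-arm64 kubectl -p multinode-demo -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"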

                                                
                                    
TestMultiNode/serial/AddNode (25.81s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-310911 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-310911 -v=5 --alsologtostderr: (25.085852832s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.81s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-310911 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp testdata/cp-test.txt multinode-310911:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile342113648/001/cp-test_multinode-310911.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911:/home/docker/cp-test.txt multinode-310911-m02:/home/docker/cp-test_multinode-310911_multinode-310911-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m02 "sudo cat /home/docker/cp-test_multinode-310911_multinode-310911-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911:/home/docker/cp-test.txt multinode-310911-m03:/home/docker/cp-test_multinode-310911_multinode-310911-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m03 "sudo cat /home/docker/cp-test_multinode-310911_multinode-310911-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp testdata/cp-test.txt multinode-310911-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile342113648/001/cp-test_multinode-310911-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911-m02:/home/docker/cp-test.txt multinode-310911:/home/docker/cp-test_multinode-310911-m02_multinode-310911.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911 "sudo cat /home/docker/cp-test_multinode-310911-m02_multinode-310911.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911-m02:/home/docker/cp-test.txt multinode-310911-m03:/home/docker/cp-test_multinode-310911-m02_multinode-310911-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m03 "sudo cat /home/docker/cp-test_multinode-310911-m02_multinode-310911-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp testdata/cp-test.txt multinode-310911-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile342113648/001/cp-test_multinode-310911-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911-m03:/home/docker/cp-test.txt multinode-310911:/home/docker/cp-test_multinode-310911-m03_multinode-310911.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911 "sudo cat /home/docker/cp-test_multinode-310911-m03_multinode-310911.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 cp multinode-310911-m03:/home/docker/cp-test.txt multinode-310911-m02:/home/docker/cp-test_multinode-310911-m03_multinode-310911-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 ssh -n multinode-310911-m02 "sudo cat /home/docker/cp-test_multinode-310911-m03_multinode-310911-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.28s)
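
Condensed by hand, the copy matrix above is three kinds of transfer (host-to-node, node-to-host, node-to-node), each verified with ssh plus cat; a sketch with illustrative names:

    out/minikube-linux-arm64 -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    out/minikube-linux-arm64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"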

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-310911 node stop m03: (1.336784037s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-310911 status: exit status 7 (530.490313ms)

                                                
                                                
-- stdout --
	multinode-310911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-310911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-310911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-310911 status --alsologtostderr: exit status 7 (534.471937ms)

                                                
                                                
-- stdout --
	multinode-310911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-310911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-310911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:53:33.014494  126598 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:53:33.014759  126598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:53:33.014792  126598 out.go:374] Setting ErrFile to fd 2...
	I1227 08:53:33.014814  126598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:53:33.015134  126598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:53:33.015395  126598 out.go:368] Setting JSON to false
	I1227 08:53:33.015466  126598 mustload.go:66] Loading cluster: multinode-310911
	I1227 08:53:33.015571  126598 notify.go:221] Checking for updates...
	I1227 08:53:33.016007  126598 config.go:182] Loaded profile config "multinode-310911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:53:33.016057  126598 status.go:174] checking status of multinode-310911 ...
	I1227 08:53:33.016669  126598 cli_runner.go:164] Run: docker container inspect multinode-310911 --format={{.State.Status}}
	I1227 08:53:33.039669  126598 status.go:371] multinode-310911 host status = "Running" (err=<nil>)
	I1227 08:53:33.039691  126598 host.go:66] Checking if "multinode-310911" exists ...
	I1227 08:53:33.040044  126598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-310911
	I1227 08:53:33.066831  126598 host.go:66] Checking if "multinode-310911" exists ...
	I1227 08:53:33.067123  126598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:53:33.067168  126598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-310911
	I1227 08:53:33.085541  126598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/multinode-310911/id_rsa Username:docker}
	I1227 08:53:33.181595  126598 ssh_runner.go:195] Run: systemctl --version
	I1227 08:53:33.188309  126598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:53:33.202182  126598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 08:53:33.262504  126598 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 08:53:33.251950947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 08:53:33.263134  126598 kubeconfig.go:125] found "multinode-310911" server: "https://192.168.67.2:8443"
	I1227 08:53:33.263176  126598 api_server.go:166] Checking apiserver status ...
	I1227 08:53:33.263223  126598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 08:53:33.276239  126598 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1383/cgroup
	I1227 08:53:33.285300  126598 api_server.go:192] apiserver freezer: "7:freezer:/docker/08388c95411fe6279de98e02e882a3b57da16cd5cef0483a795687e40b09d73c/kubepods/burstable/podb50d24a627e127412d9dcfde72333149/0692c5adb5f0ef96953c42c7fc66f4206ec917a123d04d98c6f6aa225532816c"
	I1227 08:53:33.285380  126598 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/08388c95411fe6279de98e02e882a3b57da16cd5cef0483a795687e40b09d73c/kubepods/burstable/podb50d24a627e127412d9dcfde72333149/0692c5adb5f0ef96953c42c7fc66f4206ec917a123d04d98c6f6aa225532816c/freezer.state
	I1227 08:53:33.293556  126598 api_server.go:214] freezer state: "THAWED"
	I1227 08:53:33.293583  126598 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 08:53:33.302338  126598 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 08:53:33.302365  126598 status.go:463] multinode-310911 apiserver status = Running (err=<nil>)
	I1227 08:53:33.302396  126598 status.go:176] multinode-310911 status: &{Name:multinode-310911 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:53:33.302421  126598 status.go:174] checking status of multinode-310911-m02 ...
	I1227 08:53:33.302758  126598 cli_runner.go:164] Run: docker container inspect multinode-310911-m02 --format={{.State.Status}}
	I1227 08:53:33.320277  126598 status.go:371] multinode-310911-m02 host status = "Running" (err=<nil>)
	I1227 08:53:33.320332  126598 host.go:66] Checking if "multinode-310911-m02" exists ...
	I1227 08:53:33.320670  126598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-310911-m02
	I1227 08:53:33.338563  126598 host.go:66] Checking if "multinode-310911-m02" exists ...
	I1227 08:53:33.338883  126598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 08:53:33.338929  126598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-310911-m02
	I1227 08:53:33.357747  126598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22344-2451/.minikube/machines/multinode-310911-m02/id_rsa Username:docker}
	I1227 08:53:33.457383  126598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 08:53:33.470094  126598 status.go:176] multinode-310911-m02 status: &{Name:multinode-310911-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:53:33.470129  126598 status.go:174] checking status of multinode-310911-m03 ...
	I1227 08:53:33.470443  126598 cli_runner.go:164] Run: docker container inspect multinode-310911-m03 --format={{.State.Status}}
	I1227 08:53:33.487632  126598 status.go:371] multinode-310911-m03 host status = "Stopped" (err=<nil>)
	I1227 08:53:33.487674  126598 status.go:384] host is not running, skipping remaining checks
	I1227 08:53:33.487682  126598 status.go:176] multinode-310911-m03 status: &{Name:multinode-310911-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
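
The stop-one-node behaviour is easy to check by hand; a sketch (multinode-demo is illustrative):

    out/minikube-linux-arm64 -p multinode-demo node stop m03
    out/minikube-linux-arm64 -p multinode-demo status; echo "exit: $?"   # exit 7 while any node is stopped
    out/minikube-linux-arm64 -p multinode-demo node start m03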

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.77s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-310911 node start m03 -v=5 --alsologtostderr: (6.944453069s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.77s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.35s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-310911
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-310911
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-310911: (25.150883352s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-310911 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-310911 --wait=true -v=5 --alsologtostderr: (54.020216622s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-310911
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.35s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-310911 node delete m03: (5.04937138s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-310911 stop: (24.034506712s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-310911 status: exit status 7 (94.369473ms)

                                                
                                                
-- stdout --
	multinode-310911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-310911-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-310911 status --alsologtostderr: exit status 7 (92.795944ms)

                                                
                                                
-- stdout --
	multinode-310911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-310911-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 08:55:30.532091  135413 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:55:30.532210  135413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:55:30.532221  135413 out.go:374] Setting ErrFile to fd 2...
	I1227 08:55:30.532227  135413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:55:30.532569  135413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:55:30.532996  135413 out.go:368] Setting JSON to false
	I1227 08:55:30.533054  135413 mustload.go:66] Loading cluster: multinode-310911
	I1227 08:55:30.533298  135413 notify.go:221] Checking for updates...
	I1227 08:55:30.533532  135413 config.go:182] Loaded profile config "multinode-310911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:55:30.533551  135413 status.go:174] checking status of multinode-310911 ...
	I1227 08:55:30.534110  135413 cli_runner.go:164] Run: docker container inspect multinode-310911 --format={{.State.Status}}
	I1227 08:55:30.551356  135413 status.go:371] multinode-310911 host status = "Stopped" (err=<nil>)
	I1227 08:55:30.551381  135413 status.go:384] host is not running, skipping remaining checks
	I1227 08:55:30.551388  135413 status.go:176] multinode-310911 status: &{Name:multinode-310911 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 08:55:30.551417  135413 status.go:174] checking status of multinode-310911-m02 ...
	I1227 08:55:30.551721  135413 cli_runner.go:164] Run: docker container inspect multinode-310911-m02 --format={{.State.Status}}
	I1227 08:55:30.577597  135413 status.go:371] multinode-310911-m02 host status = "Stopped" (err=<nil>)
	I1227 08:55:30.577622  135413 status.go:384] host is not running, skipping remaining checks
	I1227 08:55:30.577628  135413 status.go:176] multinode-310911-m02 status: &{Name:multinode-310911-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.22s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.56s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-310911 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1227 08:55:30.589778    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-310911 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.868828874s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-310911 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.56s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (30.2s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-310911
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-310911-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-310911-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.239832ms)

                                                
                                                
-- stdout --
	* [multinode-310911-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-310911-m02' is duplicated with machine name 'multinode-310911-m02' in profile 'multinode-310911'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-310911-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-310911-m03 --driver=docker  --container-runtime=containerd: (27.552751468s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-310911
E1227 08:56:47.166488    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-310911: exit status 80 (380.903212ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-310911 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-310911-m03 already exists in multinode-310911-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-310911-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-310911-m03: (2.121961481s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.20s)
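
Both rejections above can be provoked manually; a sketch (names are illustrative and presume an existing multinode-demo profile whose second machine is multinode-demo-m02):

    out/minikube-linux-arm64 start -p multinode-demo-m02 --driver=docker --container-runtime=containerd
    # exits 14 (MK_USAGE): the profile name collides with a machine name in multinode-demo
    out/minikube-linux-arm64 node add -p multinode-demo
    # exits 80 (GUEST_NODE_ADD) if the next node name is already owned by another profile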

                                                
                                    
TestScheduledStopUnix (100.41s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-581367 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-581367 --memory=3072 --driver=docker  --container-runtime=containerd: (24.215248582s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-581367 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 08:57:17.867422  144971 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:57:17.867597  144971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:57:17.867611  144971 out.go:374] Setting ErrFile to fd 2...
	I1227 08:57:17.867618  144971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:57:17.867907  144971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:57:17.868219  144971 out.go:368] Setting JSON to false
	I1227 08:57:17.868368  144971 mustload.go:66] Loading cluster: scheduled-stop-581367
	I1227 08:57:17.868756  144971 config.go:182] Loaded profile config "scheduled-stop-581367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:57:17.868854  144971 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/scheduled-stop-581367/config.json ...
	I1227 08:57:17.869088  144971 mustload.go:66] Loading cluster: scheduled-stop-581367
	I1227 08:57:17.869240  144971 config.go:182] Loaded profile config "scheduled-stop-581367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-581367 -n scheduled-stop-581367
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-581367 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 08:57:18.326829  145063 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:57:18.327001  145063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:57:18.327030  145063 out.go:374] Setting ErrFile to fd 2...
	I1227 08:57:18.327052  145063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:57:18.327459  145063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:57:18.327903  145063 out.go:368] Setting JSON to false
	I1227 08:57:18.330460  145063 daemonize_unix.go:73] killing process 144987 as it is an old scheduled stop
	I1227 08:57:18.330801  145063 mustload.go:66] Loading cluster: scheduled-stop-581367
	I1227 08:57:18.332355  145063 config.go:182] Loaded profile config "scheduled-stop-581367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:57:18.332462  145063 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/scheduled-stop-581367/config.json ...
	I1227 08:57:18.332691  145063 mustload.go:66] Loading cluster: scheduled-stop-581367
	I1227 08:57:18.332808  145063 config.go:182] Loaded profile config "scheduled-stop-581367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 08:57:18.339017    4288 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/scheduled-stop-581367/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-581367 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-581367 -n scheduled-stop-581367
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-581367
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-581367 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 08:57:44.313994  145755 out.go:360] Setting OutFile to fd 1 ...
	I1227 08:57:44.314184  145755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:57:44.314210  145755 out.go:374] Setting ErrFile to fd 2...
	I1227 08:57:44.314230  145755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 08:57:44.314515  145755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 08:57:44.314799  145755 out.go:368] Setting JSON to false
	I1227 08:57:44.314936  145755 mustload.go:66] Loading cluster: scheduled-stop-581367
	I1227 08:57:44.315309  145755 config.go:182] Loaded profile config "scheduled-stop-581367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 08:57:44.315419  145755 profile.go:143] Saving config to /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/scheduled-stop-581367/config.json ...
	I1227 08:57:44.315640  145755 mustload.go:66] Loading cluster: scheduled-stop-581367
	I1227 08:57:44.315797  145755 config.go:182] Loaded profile config "scheduled-stop-581367": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1227 08:58:10.213063    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-581367
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-581367: exit status 7 (69.136639ms)

                                                
                                                
-- stdout --
	scheduled-stop-581367
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-581367 -n scheduled-stop-581367
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-581367 -n scheduled-stop-581367: exit status 7 (68.707451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-581367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-581367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-581367: (4.548475748s)
--- PASS: TestScheduledStopUnix (100.41s)
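
The scheduled-stop lifecycle, reproduced by hand (sched-demo is an illustrative profile name; the timings mirror the test):

    out/minikube-linux-arm64 stop -p sched-demo --schedule 5m       # arm a stop five minutes out
    out/minikube-linux-arm64 stop -p sched-demo --cancel-scheduled  # cancel all pending scheduled stops
    out/minikube-linux-arm64 stop -p sched-demo --schedule 15s      # arm one that is allowed to fire
    sleep 20
    out/minikube-linux-arm64 status -p sched-demo                   # exit 7: host/kubelet/apiserver Stopped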

                                                
                                    
TestInsufficientStorage (12.38s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-576634 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-576634 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.826761499s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8927c346-b160-4e70-9869-cf06d86b7c46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-576634] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6e909be-4f88-49e8-8c1e-bd007c28d955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22344"}}
	{"specversion":"1.0","id":"e13e001b-26b0-43d8-a338-fcc4613bf3b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1ec4af17-5822-4ed7-8033-a1598151bd76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig"}}
	{"specversion":"1.0","id":"6122e51f-f84b-4f17-bf00-c179fa0c05cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube"}}
	{"specversion":"1.0","id":"f1668819-9766-496f-9128-0fe921caa7cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"83c5cf40-e406-42eb-aae4-63280aee9041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5231be39-05bf-40d9-80f5-341160f3820d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d648eee8-bf9a-4ac1-a3f4-56b1d62e67cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"142d353b-1529-4e57-9965-7ace0161f15c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd4788c8-8267-4ebb-87c1-becb44a0e146","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"cbfb222d-4a10-43de-84fc-1342d5263bda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-576634\" primary control-plane node in \"insufficient-storage-576634\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ec9d71d-dd56-4749-8ce1-82f5911d71f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2601383b-1937-4a31-83f7-2769f741194b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"56538b5d-489b-45f1-b4ff-ea212a5caa1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-576634 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-576634 --output=json --layout=cluster: exit status 7 (297.68752ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-576634","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-576634","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 08:58:44.117005  147621 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-576634" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-576634 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-576634 --output=json --layout=cluster: exit status 7 (301.349533ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-576634","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-576634","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 08:58:44.418462  147687 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-576634" does not appear in /home/jenkins/minikube-integration/22344-2451/kubeconfig
	E1227 08:58:44.428806  147687 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/insufficient-storage-576634/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-576634" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-576634
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-576634: (1.951413151s)
--- PASS: TestInsufficientStorage (12.38s)
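The failure output above includes minikube's own remediation advice for the RSRC_DOCKER_STORAGE error. A minimal sketch of those suggested steps, assuming the host's /var backs the Docker data directory; the commands and the --force escape hatch are restated from the quoted advice, not re-verified here:

	docker system prune -a                                                   # remove unused Docker data on the host (the "-a" is optional per the advice)
	minikube ssh -- docker system prune                                      # per the advice; applies when the cluster uses the Docker runtime
	out/minikube-linux-arm64 start -p insufficient-storage-576634 --force    # skip the storage check, as the error message notes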

                                                
                                    
x
+
TestRunningBinaryUpgrade (63.68s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.702995903 start -p running-upgrade-675913 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.702995903 start -p running-upgrade-675913 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.734166296s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-675913 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-675913 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.521135999s)
helpers_test.go:176: Cleaning up "running-upgrade-675913" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-675913
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-675913: (2.610735363s)
--- PASS: TestRunningBinaryUpgrade (63.68s)
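TestRunningBinaryUpgrade validates an in-place upgrade: the cluster is created with a previously released binary, then start is re-run against the same, still-running profile with the freshly built binary. A condensed sketch of the commands from the log above (extra logging flags omitted):

	/tmp/minikube-v1.35.0.702995903 start -p running-upgrade-675913 --memory=3072 --vm-driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 start -p running-upgrade-675913 --memory=3072 --driver=docker --container-runtime=containerd    # newer binary, same profile, cluster left running
	out/minikube-linux-arm64 delete -p running-upgrade-675913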

                                                
                                    
x
+
TestKubernetesUpgrade (348.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-535230 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1227 09:00:30.589457    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-535230 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.645244775s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-535230 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-535230 --alsologtostderr: (1.374483931s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-535230 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-535230 status --format={{.Host}}: exit status 7 (88.742873ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-535230 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-535230 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.033640072s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-535230 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-535230 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-535230 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (117.022185ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-535230] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-535230
	    minikube start -p kubernetes-upgrade-535230 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5352302 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-535230 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-535230 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1227 09:05:30.589363    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-535230 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.933207211s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-535230" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-535230
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-535230: (2.496266854s)
--- PASS: TestKubernetesUpgrade (348.82s)
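The downgrade attempt above is rejected with K8S_DOWNGRADE_UNSUPPORTED, and minikube prints three recovery paths. Restated from the suggestion block in the log, with profile names exactly as printed there:

	# 1) Recreate the cluster at the older version
	minikube delete -p kubernetes-upgrade-535230
	minikube start -p kubernetes-upgrade-535230 --kubernetes-version=v1.28.0
	# 2) Keep the upgraded cluster and create a second one at v1.28.0
	minikube start -p kubernetes-upgrade-5352302 --kubernetes-version=v1.28.0
	# 3) Continue using the existing cluster at v1.35.0
	minikube start -p kubernetes-upgrade-535230 --kubernetes-version=v1.35.0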

                                                
                                    
x
+
TestMissingContainerUpgrade (150.04s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.923093971 start -p missing-upgrade-702240 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.923093971 start -p missing-upgrade-702240 --memory=3072 --driver=docker  --container-runtime=containerd: (1m0.473428084s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-702240
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-702240
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-702240 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-702240 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m25.329695472s)
helpers_test.go:176: Cleaning up "missing-upgrade-702240" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-702240
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-702240: (2.38885789s)
--- PASS: TestMissingContainerUpgrade (150.04s)
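TestMissingContainerUpgrade covers recovery when the node container has disappeared: the profile is created with the old binary, its container is stopped and removed directly through Docker, and the new binary's start recreates it. Condensed from the logged commands:

	/tmp/minikube-v1.35.0.923093971 start -p missing-upgrade-702240 --memory=3072 --driver=docker --container-runtime=containerd
	docker stop missing-upgrade-702240 && docker rm missing-upgrade-702240    # simulate the missing node container
	out/minikube-linux-arm64 start -p missing-upgrade-702240 --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p missing-upgrade-702240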

                                                
                                    
x
+
TestPause/serial/Start (58.05s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-016458 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-016458 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (58.045991914s)
--- PASS: TestPause/serial/Start (58.05s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (8.08s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-016458 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-016458 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.054611939s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.08s)

                                                
                                    
x
+
TestPause/serial/Pause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-016458 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-016458 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-016458 --output=json --layout=cluster: exit status 2 (335.211459ms)

                                                
                                                
-- stdout --
	{"Name":"pause-016458","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-016458","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
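Note the status shape for a paused profile above: the command exits 2 and reports StatusCode 418 ("Paused") for the cluster and apiserver while kubelet shows Stopped. The same check, using the command from the log:

	out/minikube-linux-arm64 status -p pause-016458 --output=json --layout=cluster    # exit status 2 while paused; inspect the StatusName fields in the JSON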

                                                
                                    
x
+
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-016458 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-016458 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.02s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-016458 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-016458 --alsologtostderr -v=5: (3.024839593s)
--- PASS: TestPause/serial/DeletePaused (3.02s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-016458
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-016458: exit status 1 (18.094583ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-016458: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.14s)
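VerifyDeletedResources confirms that delete removed the profile's container, volume, and network; the failed volume inspect above is the expected post-delete state. The same spot checks, taken from the logged commands:

	docker ps -a                          # node container should be gone
	docker volume inspect pause-016458    # expected: exit 1, "no such volume"
	docker network ls                     # profile network should no longer be listed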

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (53.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1530569158 start -p stopped-upgrade-043523 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1227 09:01:47.166777    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1530569158 start -p stopped-upgrade-043523 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.756264454s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1530569158 -p stopped-upgrade-043523 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1530569158 -p stopped-upgrade-043523 stop: (1.288127052s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-043523 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-043523 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (18.770189279s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (53.82s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-043523
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-043523: (1.949360141s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.95s)

                                                
                                    
x
+
TestPreload/Start-NoPreload-PullImage (62.62s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-270805 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-270805 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (55.63294818s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-270805 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-270805
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-270805: (6.056170651s)
--- PASS: TestPreload/Start-NoPreload-PullImage (62.62s)

                                                
                                    
x
+
TestPreload/Restart-With-Preload-Check-User-Image (51.52s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-270805 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-270805 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (51.228431891s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-270805 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (51.52s)
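The two preload subtests above exercise the preloaded-tarball path: the cluster is first started with --preload=false, a user image is pulled into it, the cluster is stopped, and a restart with --preload=true must keep that user image visible in the image list. A condensed sketch of the logged commands (wait and logging flags omitted; the busybox mirror path is as printed in the log):

	out/minikube-linux-arm64 start -p test-preload-270805 --memory=3072 --preload=false --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p test-preload-270805 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
	out/minikube-linux-arm64 stop -p test-preload-270805
	out/minikube-linux-arm64 start -p test-preload-270805 --preload=true --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p test-preload-270805 image list    # the pulled busybox image should still be present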

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-907240 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-907240 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (96.453742ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-907240] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
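The MK_USAGE failure above documents that --no-kubernetes and --kubernetes-version are mutually exclusive; the error also suggests clearing any global default. Restated from the output:

	out/minikube-linux-arm64 start -p NoKubernetes-907240 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd    # exit 14, MK_USAGE
	minikube config unset kubernetes-version    # the suggested fix when the version comes from global config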

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (28.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-907240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-907240 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (28.437548146s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-907240 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (22.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-907240 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-907240 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (20.477526264s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-907240 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-907240 status -o json: exit status 2 (299.407633ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-907240","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-907240
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-907240: (1.984499673s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.76s)
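StartWithStopK8s shows that re-running start with --no-kubernetes on an existing profile leaves the node container up but stops the Kubernetes components, which status then reports with exit code 2. Sketch from the logged commands:

	out/minikube-linux-arm64 start -p NoKubernetes-907240 --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p NoKubernetes-907240 status -o json    # exit 2: Host "Running", Kubelet and APIServer "Stopped"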

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-907240 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-907240 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.722784209s)
--- PASS: TestNoKubernetes/serial/Start (7.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22344-2451/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-907240 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-907240 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.144276ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
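The check above asserts that kubelet is not active inside the node: systemctl is-active exits non-zero for an inactive unit (status 3 here), which surfaces as exit status 1 from minikube ssh. The probe as run in the log:

	out/minikube-linux-arm64 ssh -p NoKubernetes-907240 "sudo systemctl is-active --quiet service kubelet"    # non-zero exit expected while Kubernetes is disabled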

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
E1227 09:06:47.166485    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-907240
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-907240: (1.32337396s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-907240 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-907240 --driver=docker  --container-runtime=containerd: (6.62736945s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-907240 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-907240 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.704962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-224878 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-224878 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (205.640232ms)

                                                
                                                
-- stdout --
	* [false-224878] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:07:01.456290  198171 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:07:01.456752  198171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:01.456789  198171 out.go:374] Setting ErrFile to fd 2...
	I1227 09:07:01.456809  198171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:07:01.457326  198171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22344-2451/.minikube/bin
	I1227 09:07:01.463995  198171 out.go:368] Setting JSON to false
	I1227 09:07:01.469561  198171 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2975,"bootTime":1766823447,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1227 09:07:01.469711  198171 start.go:143] virtualization:  
	I1227 09:07:01.473339  198171 out.go:179] * [false-224878] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:07:01.477218  198171 out.go:179]   - MINIKUBE_LOCATION=22344
	I1227 09:07:01.477416  198171 notify.go:221] Checking for updates...
	I1227 09:07:01.482931  198171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:07:01.485938  198171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22344-2451/kubeconfig
	I1227 09:07:01.488841  198171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22344-2451/.minikube
	I1227 09:07:01.491841  198171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:07:01.494762  198171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:07:01.498386  198171 config.go:182] Loaded profile config "force-systemd-env-145961": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:07:01.498488  198171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:07:01.542930  198171 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:07:01.543098  198171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:07:01.599940  198171 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 09:07:01.590734384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:07:01.600080  198171 docker.go:319] overlay module found
	I1227 09:07:01.603247  198171 out.go:179] * Using the docker driver based on user configuration
	I1227 09:07:01.606080  198171 start.go:309] selected driver: docker
	I1227 09:07:01.606106  198171 start.go:928] validating driver "docker" against <nil>
	I1227 09:07:01.606120  198171 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:07:01.609554  198171 out.go:203] 
	W1227 09:07:01.612417  198171 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1227 09:07:01.615248  198171 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-224878 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-224878" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-224878

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: cri-dockerd version:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: containerd daemon status:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: containerd daemon config:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: /etc/containerd/config.toml:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: containerd config dump:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: crio daemon status:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: crio daemon config:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: /etc/crio:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

>>> host: crio config:
* Profile "false-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-224878"

----------------------- debugLogs end: false-224878 [took: 3.202451879s] --------------------------------
helpers_test.go:176: Cleaning up "false-224878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-224878
--- PASS: TestNetworkPlugins/group/false (3.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (58.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-046838 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1227 09:14:50.213359    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-046838 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (58.513027623s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-046838 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ab890505-796b-4b99-9e7d-818ffe90167d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [ab890505-796b-4b99-9e7d-818ffe90167d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0036688s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-046838 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-046838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-046838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.089563993s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-046838 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-046838 --alsologtostderr -v=3
E1227 09:15:30.589746    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-046838 --alsologtostderr -v=3: (12.115821329s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-046838 -n old-k8s-version-046838
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-046838 -n old-k8s-version-046838: exit status 7 (64.411835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-046838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-046838 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-046838 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.58079575s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-046838 -n old-k8s-version-046838
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-lq994" [76065eb3-cfab-4293-a1e5-072ad48bf513] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00768537s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-lq994" [76065eb3-cfab-4293-a1e5-072ad48bf513] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003935913s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-046838 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-046838 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-046838 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-046838 -n old-k8s-version-046838
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-046838 -n old-k8s-version-046838: exit status 2 (321.621179ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-046838 -n old-k8s-version-046838
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-046838 -n old-k8s-version-046838: exit status 2 (312.852507ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-046838 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-046838 -n old-k8s-version-046838
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-046838 -n old-k8s-version-046838
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-524171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1227 09:16:47.165743    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-524171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.617961182s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-524171 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2f0b0dc7-ec16-49a2-b02e-12d1e520d18c] Pending
helpers_test.go:353: "busybox" [2f0b0dc7-ec16-49a2-b02e-12d1e520d18c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2f0b0dc7-ec16-49a2-b02e-12d1e520d18c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003009539s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-524171 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-524171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-524171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.189657841s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-524171 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-524171 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-524171 --alsologtostderr -v=3: (12.154607498s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-524171 -n no-preload-524171
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-524171 -n no-preload-524171: exit status 7 (67.464365ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-524171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (48.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-524171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-524171 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (48.58588534s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-524171 -n no-preload-524171
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-9xdd8" [c2e83a23-51a5-4e89-9292-35aa07eb571f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003409885s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-9xdd8" [c2e83a23-51a5-4e89-9292-35aa07eb571f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002949717s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-524171 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-524171 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-524171 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-524171 -n no-preload-524171
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-524171 -n no-preload-524171: exit status 2 (330.537663ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-524171 -n no-preload-524171
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-524171 -n no-preload-524171: exit status 2 (356.655768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-524171 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-524171 -n no-preload-524171
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-524171 -n no-preload-524171
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (51.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-895607 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-895607 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.168387947s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-720207 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-720207 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.921861481s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-895607 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0030db38-d74a-49da-b074-c8eff2f9acad] Pending
helpers_test.go:353: "busybox" [0030db38-d74a-49da-b074-c8eff2f9acad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0030db38-d74a-49da-b074-c8eff2f9acad] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0043031s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-895607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-720207 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6255e885-e9c9-4dc5-9fc4-068922883f58] Pending
helpers_test.go:353: "busybox" [6255e885-e9c9-4dc5-9fc4-068922883f58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6255e885-e9c9-4dc5-9fc4-068922883f58] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004681989s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-720207 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-895607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-895607 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-895607 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-895607 --alsologtostderr -v=3: (12.25146965s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-720207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-720207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.154779551s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-720207 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-720207 --alsologtostderr -v=3
E1227 09:20:11.698315    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:11.703677    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:11.714024    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:11.734451    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:11.774823    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:11.855746    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:12.016720    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:12.337759    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:12.978667    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:14.258945    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:16.819504    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-720207 --alsologtostderr -v=3: (12.12285359s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-895607 -n embed-certs-895607
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-895607 -n embed-certs-895607: exit status 7 (70.706516ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-895607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (53.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-895607 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1227 09:20:21.939785    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-895607 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (53.2277679s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-895607 -n embed-certs-895607
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.62s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207: exit status 7 (66.732848ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-720207 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-720207 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1227 09:20:30.588779    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:32.180937    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:20:52.661151    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-720207 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (54.123709645s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-hcshx" [5189e9f5-77c7-441e-9476-7fcd294b3701] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003004382s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-pf4sk" [7fad918a-01ac-4e21-b164-c4e1d555a455] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004382727s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-hcshx" [5189e9f5-77c7-441e-9476-7fcd294b3701] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003700362s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-895607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-pf4sk" [7fad918a-01ac-4e21-b164-c4e1d555a455] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004392652s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-720207 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-895607 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-895607 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-895607 -n embed-certs-895607
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-895607 -n embed-certs-895607: exit status 2 (329.895286ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-895607 -n embed-certs-895607
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-895607 -n embed-certs-895607: exit status 2 (328.432095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-895607 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-895607 -n embed-certs-895607
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-895607 -n embed-certs-895607
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-720207 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-720207 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-720207 --alsologtostderr -v=1: (1.272062447s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207: exit status 2 (487.653319ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207: exit status 2 (400.495912ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-720207 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-720207 -n default-k8s-diff-port-720207
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-261321 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-261321 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (36.892416131s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.89s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.84s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-603182 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-603182 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (4.645417438s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-603182" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-603182
--- PASS: TestPreload/PreloadSrc/gcs (4.84s)

                                                
                                    
TestPreload/PreloadSrc/github (4.36s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-036877 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-036877 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (4.120351182s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-036877" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-036877
--- PASS: TestPreload/PreloadSrc/github (4.36s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.71s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-452506 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-452506" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-452506
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.71s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1227 09:21:47.166308    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (53.760478924s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-261321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-261321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.966511676s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-261321 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-261321 --alsologtostderr -v=3: (1.624916468s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261321 -n newest-cni-261321
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261321 -n newest-cni-261321: exit status 7 (139.649093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-261321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-261321 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-261321 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (17.771951246s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261321 -n newest-cni-261321
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-261321 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-261321 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-261321 -n newest-cni-261321
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-261321 -n newest-cni-261321: exit status 2 (338.89962ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-261321 -n newest-cni-261321
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-261321 -n newest-cni-261321: exit status 2 (332.914714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-261321 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-261321 -n newest-cni-261321
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-261321 -n newest-cni-261321
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.11s)
E1227 09:27:35.554471    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:41.648813    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:41.654164    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:41.664515    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:41.684866    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:41.725307    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:41.805665    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:41.966084    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:42.286683    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:42.927724    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:27:44.208098    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (50.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1227 09:22:38.113416    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:22:40.674212    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (50.881837922s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.88s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-224878 "pgrep -a kubelet"
I1227 09:22:41.298089    4288 config.go:182] Loaded profile config "auto-224878": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-224878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7h5zv" [c1ef9c5c-479c-409b-8881-c45657e8ed19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 09:22:45.795824    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/no-preload-524171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-7h5zv" [c1ef9c5c-479c-409b-8881-c45657e8ed19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005148596s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-224878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m11.81859751s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.82s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-mjslp" [8ab5087e-240e-4a1d-b4ed-268ae3f268f0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005713631s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-224878 "pgrep -a kubelet"
I1227 09:23:34.895462    4288 config.go:182] Loaded profile config "kindnet-224878": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-224878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rq95p" [6d8a1697-b775-4677-ad96-948e10995660] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rq95p" [6d8a1697-b775-4677-ad96-948e10995660] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004117798s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-224878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (51.048838893s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.05s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-brmct" [a0eb8385-a76d-44e5-aacb-7c45cae1d0ca] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003841363s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-224878 "pgrep -a kubelet"
I1227 09:24:36.331931    4288 config.go:182] Loaded profile config "calico-224878": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-224878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-gp4dw" [6a9b41c4-3747-4555-ace8-f76b7ccd4cd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-gp4dw" [6a9b41c4-3747-4555-ace8-f76b7ccd4cd5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.006188947s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.46s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-224878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-224878 "pgrep -a kubelet"
E1227 09:25:02.324402    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/default-k8s-diff-port-720207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1227 09:25:02.638318    4288 config.go:182] Loaded profile config "custom-flannel-224878": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-224878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lj9kt" [0493ef35-358c-43e2-b29b-24d1c4eebfc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 09:25:03.604769    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/default-k8s-diff-port-720207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:06.165807    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/default-k8s-diff-port-720207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-lj9kt" [0493ef35-358c-43e2-b29b-24d1c4eebfc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003370285s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (76.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1227 09:25:11.286970    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/default-k8s-diff-port-720207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:25:11.698383    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/old-k8s-version-046838/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.309415816s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-224878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1227 09:25:13.641292    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/addons-130695/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1227 09:25:42.008189    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/default-k8s-diff-port-720207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:22.968342    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/default-k8s-diff-port-720207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.333731426s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-224878 "pgrep -a kubelet"
I1227 09:26:26.900783    4288 config.go:182] Loaded profile config "enable-default-cni-224878": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-224878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-f6697" [08a64a25-0c52-4492-87d2-3ef5d1657b2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-f6697" [08a64a25-0c52-4492-87d2-3ef5d1657b2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00389369s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-jqm7d" [caacadea-4a4e-4900-ad0b-24480144603d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004038185s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-224878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-224878 "pgrep -a kubelet"
I1227 09:26:38.900787    4288 config.go:182] Loaded profile config "flannel-224878": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-224878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-tlnrr" [8920b113-6e90-4b54-8211-e68776538ca9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-tlnrr" [8920b113-6e90-4b54-8211-e68776538ca9] Running
E1227 09:26:47.166237    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/functional-562438/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003399964s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-224878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (47.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-224878 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (47.326257952s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-224878 "pgrep -a kubelet"
E1227 09:27:44.889197    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/default-k8s-diff-port-720207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1227 09:27:44.973962    4288 config.go:182] Loaded profile config "bridge-224878": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-224878 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rcgnr" [47935386-1fcd-4714-980d-75c5459f412f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 09:27:46.769142    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-rcgnr" [47935386-1fcd-4714-980d-75c5459f412f] Running
E1227 09:27:51.890365    4288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22344-2451/.minikube/profiles/auto-224878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00320727s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-224878 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-224878 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (30/337)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-704034 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-704034" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-704034
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-641074" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-641074
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-224878 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-224878" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-224878

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-224878"

                                                
                                                
----------------------- debugLogs end: kubenet-224878 [took: 3.466317862s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-224878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-224878
--- SKIP: TestNetworkPlugins/group/kubenet (3.62s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-224878 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-224878" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-224878

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-224878" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-224878"

                                                
                                                
----------------------- debugLogs end: cilium-224878 [took: 3.757602843s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-224878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-224878
--- SKIP: TestNetworkPlugins/group/cilium (3.92s)