Test Report: Docker_Linux_containerd_arm64 22427

f815509b9ccb41a33be05aa7241c338e7909bf25:2026-01-10:43184

Failed tests (3/337)

Order  Failed test                                              Duration (s)
52     TestForceSystemdFlag                                     503.98
53     TestForceSystemdEnv                                      506.28
154    TestFunctional/parallel/ImageCommands/ImageSaveDaemon    1.16
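Both TestForceSystemd* failures below come from the same out/minikube-linux-arm64 start invocation (docker_test.go:91), which exited with status 109 after roughly 8m20s. A minimal sketch for re-running that start command locally, assuming an arm64 Linux host with Docker available and a freshly built out/minikube-linux-arm64 binary; the profile name used here is hypothetical, not the one from the report:

	# Remove any stale profile, then repeat the failing start (command taken verbatim from the log below).
	out/minikube-linux-arm64 delete -p force-systemd-flag-repro || true
	out/minikube-linux-arm64 start -p force-systemd-flag-repro \
	  --memory=3072 --force-systemd --alsologtostderr -v=5 \
	  --driver=docker --container-runtime=containerd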
TestForceSystemdFlag (503.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0110 09:07:17.255612    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m19.946280713s)

-- stdout --
	* [force-systemd-flag-447307] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-447307" primary control-plane node in "force-systemd-flag-447307" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I0110 09:05:33.409464  209870 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:05:33.409589  209870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:05:33.409600  209870 out.go:374] Setting ErrFile to fd 2...
	I0110 09:05:33.409606  209870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:05:33.409935  209870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 09:05:33.410391  209870 out.go:368] Setting JSON to false
	I0110 09:05:33.411232  209870 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2887,"bootTime":1768033047,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 09:05:33.411330  209870 start.go:143] virtualization:  
	I0110 09:05:33.415136  209870 out.go:179] * [force-systemd-flag-447307] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:05:33.419458  209870 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:05:33.419525  209870 notify.go:221] Checking for updates...
	I0110 09:05:33.425912  209870 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:05:33.429209  209870 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 09:05:33.432347  209870 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 09:05:33.435460  209870 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:05:33.438570  209870 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:05:33.442372  209870 config.go:182] Loaded profile config "force-systemd-env-562333": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 09:05:33.442573  209870 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:05:33.476555  209870 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:05:33.476689  209870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:05:33.532767  209870 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:05:33.522514518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:05:33.532884  209870 docker.go:319] overlay module found
	I0110 09:05:33.536216  209870 out.go:179] * Using the docker driver based on user configuration
	I0110 09:05:33.539266  209870 start.go:309] selected driver: docker
	I0110 09:05:33.539290  209870 start.go:928] validating driver "docker" against <nil>
	I0110 09:05:33.539304  209870 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:05:33.540257  209870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:05:33.606717  209870 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:05:33.597485332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:05:33.606880  209870 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:05:33.607150  209870 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 09:05:33.610109  209870 out.go:179] * Using Docker driver with root privileges
	I0110 09:05:33.613052  209870 cni.go:84] Creating CNI manager for ""
	I0110 09:05:33.613123  209870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 09:05:33.613137  209870 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 09:05:33.613210  209870 start.go:353] cluster config:
	{Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I0110 09:05:33.616369  209870 out.go:179] * Starting "force-systemd-flag-447307" primary control-plane node in "force-systemd-flag-447307" cluster
	I0110 09:05:33.619253  209870 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0110 09:05:33.622283  209870 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:05:33.625240  209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:05:33.625284  209870 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I0110 09:05:33.625294  209870 cache.go:65] Caching tarball of preloaded images
	I0110 09:05:33.625329  209870 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:05:33.625389  209870 preload.go:251] Found /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 09:05:33.625401  209870 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I0110 09:05:33.625502  209870 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json ...
	I0110 09:05:33.625518  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json: {Name:mkf2d31f6f9a10b94727bf46c1c457843d8705ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:33.646574  209870 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 09:05:33.646596  209870 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 09:05:33.646616  209870 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:05:33.646655  209870 start.go:360] acquireMachinesLock for force-systemd-flag-447307: {Name:mkd48671d04edb3bc812df6ed361a4acb7311dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:05:33.646759  209870 start.go:364] duration metric: took 84.121µs to acquireMachinesLock for "force-systemd-flag-447307"
	I0110 09:05:33.646788  209870 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0110 09:05:33.646856  209870 start.go:125] createHost starting for "" (driver="docker")
	I0110 09:05:33.650271  209870 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 09:05:33.650508  209870 start.go:159] libmachine.API.Create for "force-systemd-flag-447307" (driver="docker")
	I0110 09:05:33.650544  209870 client.go:173] LocalClient.Create starting
	I0110 09:05:33.650632  209870 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem
	I0110 09:05:33.650669  209870 main.go:144] libmachine: Decoding PEM data...
	I0110 09:05:33.650699  209870 main.go:144] libmachine: Parsing certificate...
	I0110 09:05:33.650748  209870 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem
	I0110 09:05:33.650798  209870 main.go:144] libmachine: Decoding PEM data...
	I0110 09:05:33.650814  209870 main.go:144] libmachine: Parsing certificate...
	I0110 09:05:33.651204  209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 09:05:33.667215  209870 cli_runner.go:211] docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 09:05:33.667320  209870 network_create.go:284] running [docker network inspect force-systemd-flag-447307] to gather additional debugging logs...
	I0110 09:05:33.667372  209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307
	W0110 09:05:33.687489  209870 cli_runner.go:211] docker network inspect force-systemd-flag-447307 returned with exit code 1
	I0110 09:05:33.687524  209870 network_create.go:287] error running [docker network inspect force-systemd-flag-447307]: docker network inspect force-systemd-flag-447307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-447307 not found
	I0110 09:05:33.687543  209870 network_create.go:289] output of [docker network inspect force-systemd-flag-447307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-447307 not found
	
	** /stderr **
	I0110 09:05:33.687651  209870 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:05:33.707102  209870 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e01acd8ff726 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:8b:1d:1f:6a:28} reservation:<nil>}
	I0110 09:05:33.707525  209870 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab4f89e52867 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:d7:2a:6d:f4:96} reservation:<nil>}
	I0110 09:05:33.707892  209870 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8b226bd60dd7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:a9:74:b4} reservation:<nil>}
	I0110 09:05:33.708300  209870 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e0acd7192481 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:16:f7:84:76:30} reservation:<nil>}
	I0110 09:05:33.708837  209870 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001930810}
	I0110 09:05:33.708905  209870 network_create.go:124] attempt to create docker network force-systemd-flag-447307 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 09:05:33.708992  209870 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-447307 force-systemd-flag-447307
	I0110 09:05:33.788655  209870 network_create.go:108] docker network force-systemd-flag-447307 192.168.85.0/24 created
	I0110 09:05:33.788695  209870 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-447307" container
	I0110 09:05:33.788778  209870 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 09:05:33.805636  209870 cli_runner.go:164] Run: docker volume create force-systemd-flag-447307 --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --label created_by.minikube.sigs.k8s.io=true
	I0110 09:05:33.825622  209870 oci.go:103] Successfully created a docker volume force-systemd-flag-447307
	I0110 09:05:33.825717  209870 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-447307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --entrypoint /usr/bin/test -v force-systemd-flag-447307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 09:05:34.392818  209870 oci.go:107] Successfully prepared a docker volume force-systemd-flag-447307
	I0110 09:05:34.392892  209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:05:34.392910  209870 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 09:05:34.392989  209870 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-447307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 09:05:38.304331  209870 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-447307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.911286648s)
	I0110 09:05:38.304367  209870 kic.go:203] duration metric: took 3.911453699s to extract preloaded images to volume ...
	W0110 09:05:38.304501  209870 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 09:05:38.304616  209870 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 09:05:38.370965  209870 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-447307 --name force-systemd-flag-447307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-447307 --network force-systemd-flag-447307 --ip 192.168.85.2 --volume force-systemd-flag-447307:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 09:05:38.714423  209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Running}}
	I0110 09:05:38.748378  209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
	I0110 09:05:38.768410  209870 cli_runner.go:164] Run: docker exec force-systemd-flag-447307 stat /var/lib/dpkg/alternatives/iptables
	I0110 09:05:38.820953  209870 oci.go:144] the created container "force-systemd-flag-447307" has a running status.
	I0110 09:05:38.820980  209870 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa...
	I0110 09:05:39.091967  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 09:05:39.092015  209870 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 09:05:39.120080  209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
	I0110 09:05:39.153524  209870 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 09:05:39.153549  209870 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-447307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 09:05:39.219499  209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
	I0110 09:05:39.244109  209870 machine.go:94] provisionDockerMachine start ...
	I0110 09:05:39.244193  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:39.268479  209870 main.go:144] libmachine: Using SSH client type: native
	I0110 09:05:39.268982  209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33044 <nil> <nil>}
	I0110 09:05:39.268997  209870 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 09:05:39.269646  209870 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 09:05:42.418786  209870 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-447307
	
	I0110 09:05:42.418815  209870 ubuntu.go:182] provisioning hostname "force-systemd-flag-447307"
	I0110 09:05:42.418891  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:42.436389  209870 main.go:144] libmachine: Using SSH client type: native
	I0110 09:05:42.436711  209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33044 <nil> <nil>}
	I0110 09:05:42.436734  209870 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-447307 && echo "force-systemd-flag-447307" | sudo tee /etc/hostname
	I0110 09:05:42.592363  209870 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-447307
	
	I0110 09:05:42.592444  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:42.609168  209870 main.go:144] libmachine: Using SSH client type: native
	I0110 09:05:42.609485  209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33044 <nil> <nil>}
	I0110 09:05:42.609511  209870 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-447307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-447307/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-447307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 09:05:42.763850  209870 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 09:05:42.763885  209870 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2439/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2439/.minikube}
	I0110 09:05:42.763907  209870 ubuntu.go:190] setting up certificates
	I0110 09:05:42.763917  209870 provision.go:84] configureAuth start
	I0110 09:05:42.763975  209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
	I0110 09:05:42.780237  209870 provision.go:143] copyHostCerts
	I0110 09:05:42.780278  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
	I0110 09:05:42.780310  209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem, removing ...
	I0110 09:05:42.780322  209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
	I0110 09:05:42.780397  209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem (1078 bytes)
	I0110 09:05:42.780483  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
	I0110 09:05:42.780504  209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem, removing ...
	I0110 09:05:42.780509  209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
	I0110 09:05:42.780582  209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem (1123 bytes)
	I0110 09:05:42.780638  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
	I0110 09:05:42.780660  209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem, removing ...
	I0110 09:05:42.780668  209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
	I0110 09:05:42.780694  209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem (1675 bytes)
	I0110 09:05:42.780745  209870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-447307 san=[127.0.0.1 192.168.85.2 force-systemd-flag-447307 localhost minikube]
	I0110 09:05:43.091195  209870 provision.go:177] copyRemoteCerts
	I0110 09:05:43.091276  209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 09:05:43.091317  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.112972  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.219219  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 09:05:43.219278  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 09:05:43.236969  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 09:05:43.237036  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 09:05:43.254736  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 09:05:43.254810  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 09:05:43.272505  209870 provision.go:87] duration metric: took 508.564973ms to configureAuth
	I0110 09:05:43.272534  209870 ubuntu.go:206] setting minikube options for container-runtime
	I0110 09:05:43.272716  209870 config.go:182] Loaded profile config "force-systemd-flag-447307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 09:05:43.272731  209870 machine.go:97] duration metric: took 4.028601641s to provisionDockerMachine
	I0110 09:05:43.272739  209870 client.go:176] duration metric: took 9.622186198s to LocalClient.Create
	I0110 09:05:43.272758  209870 start.go:167] duration metric: took 9.622250757s to libmachine.API.Create "force-systemd-flag-447307"
	I0110 09:05:43.272767  209870 start.go:293] postStartSetup for "force-systemd-flag-447307" (driver="docker")
	I0110 09:05:43.272776  209870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 09:05:43.272844  209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 09:05:43.272890  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.291040  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.399676  209870 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 09:05:43.403118  209870 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 09:05:43.403149  209870 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 09:05:43.403161  209870 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/addons for local assets ...
	I0110 09:05:43.403215  209870 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/files for local assets ...
	I0110 09:05:43.403296  209870 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> 42572.pem in /etc/ssl/certs
	I0110 09:05:43.403307  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> /etc/ssl/certs/42572.pem
	I0110 09:05:43.403441  209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 09:05:43.411262  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /etc/ssl/certs/42572.pem (1708 bytes)
	I0110 09:05:43.428973  209870 start.go:296] duration metric: took 156.191974ms for postStartSetup
	I0110 09:05:43.429327  209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
	I0110 09:05:43.449128  209870 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json ...
	I0110 09:05:43.449426  209870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:05:43.449470  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.469062  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.568491  209870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 09:05:43.573119  209870 start.go:128] duration metric: took 9.926249482s to createHost
	I0110 09:05:43.573146  209870 start.go:83] releasing machines lock for "force-systemd-flag-447307", held for 9.926372964s
	I0110 09:05:43.573217  209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
	I0110 09:05:43.590249  209870 ssh_runner.go:195] Run: cat /version.json
	I0110 09:05:43.590305  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.590572  209870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 09:05:43.590643  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.615492  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.617966  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.715397  209870 ssh_runner.go:195] Run: systemctl --version
	I0110 09:05:43.819535  209870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 09:05:43.823893  209870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 09:05:43.824019  209870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 09:05:43.851657  209870 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 09:05:43.851693  209870 start.go:496] detecting cgroup driver to use...
	I0110 09:05:43.851707  209870 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 09:05:43.851778  209870 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0110 09:05:43.867281  209870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 09:05:43.880163  209870 docker.go:218] disabling cri-docker service (if available) ...
	I0110 09:05:43.880224  209870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 09:05:43.897601  209870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 09:05:43.916022  209870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 09:05:44.034195  209870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 09:05:44.157732  209870 docker.go:234] disabling docker service ...
	I0110 09:05:44.157806  209870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 09:05:44.182671  209870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 09:05:44.199192  209870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 09:05:44.328856  209870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 09:05:44.450963  209870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 09:05:44.463783  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 09:05:44.479468  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 09:05:44.488749  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 09:05:44.497642  209870 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 09:05:44.497707  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 09:05:44.506787  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 09:05:44.516077  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 09:05:44.524994  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 09:05:44.533763  209870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 09:05:44.542113  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 09:05:44.551294  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 09:05:44.560593  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 09:05:44.569667  209870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 09:05:44.577424  209870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 09:05:44.585163  209870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:05:44.695011  209870 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0110 09:05:44.824179  209870 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I0110 09:05:44.824246  209870 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0110 09:05:44.828299  209870 start.go:574] Will wait 60s for crictl version
	I0110 09:05:44.828398  209870 ssh_runner.go:195] Run: which crictl
	I0110 09:05:44.831917  209870 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 09:05:44.856160  209870 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I0110 09:05:44.856261  209870 ssh_runner.go:195] Run: containerd --version
	I0110 09:05:44.877321  209870 ssh_runner.go:195] Run: containerd --version
	I0110 09:05:44.901544  209870 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I0110 09:05:44.904468  209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:05:44.921071  209870 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 09:05:44.924958  209870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:05:44.935964  209870 kubeadm.go:884] updating cluster {Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 09:05:44.936082  209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:05:44.936148  209870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:05:44.963934  209870 containerd.go:635] all images are preloaded for containerd runtime.
	I0110 09:05:44.963961  209870 containerd.go:542] Images already preloaded, skipping extraction
	I0110 09:05:44.964020  209870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:05:44.993776  209870 containerd.go:635] all images are preloaded for containerd runtime.
	I0110 09:05:44.993799  209870 cache_images.go:86] Images are preloaded, skipping loading
	I0110 09:05:44.993808  209870 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I0110 09:05:44.993914  209870 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-447307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 09:05:44.993982  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I0110 09:05:45.035190  209870 cni.go:84] Creating CNI manager for ""
	I0110 09:05:45.035214  209870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 09:05:45.035241  209870 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 09:05:45.035266  209870 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-447307 NodeName:force-systemd-flag-447307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 09:05:45.035486  209870 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-447307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 09:05:45.035574  209870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 09:05:45.067399  209870 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 09:05:45.067510  209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 09:05:45.092872  209870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0110 09:05:45.121773  209870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 09:05:45.154954  209870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I0110 09:05:45.231953  209870 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 09:05:45.237475  209870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:05:45.260281  209870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:05:45.419211  209870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:05:45.437873  209870 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307 for IP: 192.168.85.2
	I0110 09:05:45.437942  209870 certs.go:195] generating shared ca certs ...
	I0110 09:05:45.437991  209870 certs.go:227] acquiring lock for ca certs: {Name:mk2efb7c26990a28337b434f05b8d75a57c7c690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.438190  209870 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key
	I0110 09:05:45.438256  209870 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key
	I0110 09:05:45.438302  209870 certs.go:257] generating profile certs ...
	I0110 09:05:45.438386  209870 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key
	I0110 09:05:45.438435  209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt with IP's: []
	I0110 09:05:45.568734  209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt ...
	I0110 09:05:45.568768  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt: {Name:mk93119e0751f692d1add2634b06b07d570f7c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.568970  209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key ...
	I0110 09:05:45.568988  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key: {Name:mkd0ec99179f57a4bf574d82b9d5dd3231ca72d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.569084  209870 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d
	I0110 09:05:45.569103  209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 09:05:45.634799  209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d ...
	I0110 09:05:45.634831  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d: {Name:mk1f93a1a18d813cb88fd475e0986fb6bcc9bd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.635018  209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d ...
	I0110 09:05:45.635033  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d: {Name:mkc43e75e3e468932f9ce36624b08b9cf784c70c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.635122  209870 certs.go:382] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt
	I0110 09:05:45.635249  209870 certs.go:386] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key
	I0110 09:05:45.635318  209870 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key
	I0110 09:05:45.635336  209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt with IP's: []
	I0110 09:05:45.872246  209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt ...
	I0110 09:05:45.872281  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt: {Name:mka6e1c552726af90963b0c4641d45cc7689a203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.872469  209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key ...
	I0110 09:05:45.872484  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key: {Name:mk8e5271296bc709b5c836c748d108f6bf8306ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.872565  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 09:05:45.872587  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 09:05:45.872599  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 09:05:45.872615  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 09:05:45.872633  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 09:05:45.872650  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 09:05:45.872666  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 09:05:45.872681  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 09:05:45.872732  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem (1338 bytes)
	W0110 09:05:45.872776  209870 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257_empty.pem, impossibly tiny 0 bytes
	I0110 09:05:45.872788  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 09:05:45.872823  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem (1078 bytes)
	I0110 09:05:45.872851  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem (1123 bytes)
	I0110 09:05:45.872886  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem (1675 bytes)
	I0110 09:05:45.872937  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem (1708 bytes)
	I0110 09:05:45.872975  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem -> /usr/share/ca-certificates/4257.pem
	I0110 09:05:45.872997  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> /usr/share/ca-certificates/42572.pem
	I0110 09:05:45.873020  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:45.873565  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 09:05:45.894181  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 09:05:45.914336  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 09:05:45.933562  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 09:05:45.952739  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 09:05:45.971678  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 09:05:45.990590  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 09:05:46.009080  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 09:05:46.029612  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem --> /usr/share/ca-certificates/4257.pem (1338 bytes)
	I0110 09:05:46.049043  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /usr/share/ca-certificates/42572.pem (1708 bytes)
	I0110 09:05:46.066769  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 09:05:46.086243  209870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 09:05:46.099249  209870 ssh_runner.go:195] Run: openssl version
	I0110 09:05:46.106159  209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:46.113597  209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 09:05:46.121108  209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:46.124874  209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:46.124949  209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:46.165876  209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 09:05:46.173607  209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 09:05:46.181172  209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4257.pem
	I0110 09:05:46.189176  209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4257.pem /etc/ssl/certs/4257.pem
	I0110 09:05:46.197604  209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4257.pem
	I0110 09:05:46.202316  209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:27 /usr/share/ca-certificates/4257.pem
	I0110 09:05:46.202452  209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4257.pem
	I0110 09:05:46.244731  209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 09:05:46.252500  209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4257.pem /etc/ssl/certs/51391683.0
	I0110 09:05:46.260155  209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42572.pem
	I0110 09:05:46.267974  209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42572.pem /etc/ssl/certs/42572.pem
	I0110 09:05:46.276015  209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42572.pem
	I0110 09:05:46.280136  209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:27 /usr/share/ca-certificates/42572.pem
	I0110 09:05:46.280201  209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42572.pem
	I0110 09:05:46.321282  209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 09:05:46.328915  209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42572.pem /etc/ssl/certs/3ec20f2e.0
	I0110 09:05:46.336599  209870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 09:05:46.340309  209870 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 09:05:46.340362  209870 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:05:46.340440  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0110 09:05:46.340505  209870 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:05:46.368732  209870 cri.go:96] found id: ""
	I0110 09:05:46.368825  209870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 09:05:46.377083  209870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 09:05:46.385046  209870 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:05:46.385169  209870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:05:46.393422  209870 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:05:46.393446  209870 kubeadm.go:158] found existing configuration files:
	
	I0110 09:05:46.393528  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:05:46.402057  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:05:46.402155  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:05:46.409739  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:05:46.417579  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:05:46.417663  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:05:46.425416  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:05:46.433477  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:05:46.433598  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:05:46.442123  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:05:46.453573  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:05:46.453686  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:05:46.464707  209870 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:05:46.525523  209870 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:05:46.525947  209870 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:05:46.597987  209870 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:05:46.598061  209870 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:05:46.598113  209870 kubeadm.go:319] OS: Linux
	I0110 09:05:46.598166  209870 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:05:46.598220  209870 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:05:46.598270  209870 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:05:46.598320  209870 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:05:46.598379  209870 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:05:46.598434  209870 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:05:46.598482  209870 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:05:46.598540  209870 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:05:46.598589  209870 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:05:46.662544  209870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:05:46.662658  209870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:05:46.662754  209870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:05:46.671756  209870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:05:46.678150  209870 out.go:252]   - Generating certificates and keys ...
	I0110 09:05:46.678326  209870 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:05:46.678444  209870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:05:47.409478  209870 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 09:05:47.578923  209870 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 09:05:47.675285  209870 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 09:05:47.915407  209870 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 09:05:48.056354  209870 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 09:05:48.056768  209870 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 09:05:48.397487  209870 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 09:05:48.397857  209870 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 09:05:48.490818  209870 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 09:05:48.893329  209870 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 09:05:49.168813  209870 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 09:05:49.169088  209870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:05:49.386189  209870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:05:49.640500  209870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:05:50.248302  209870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:05:50.303575  209870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:05:50.498195  209870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:05:50.498841  209870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:05:50.501376  209870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:05:50.505131  209870 out.go:252]   - Booting up control plane ...
	I0110 09:05:50.505260  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:05:50.505353  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:05:50.505445  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:05:50.521530  209870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:05:50.521669  209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:05:50.530142  209870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:05:50.530443  209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:05:50.530495  209870 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:05:50.669341  209870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:05:50.669965  209870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:09:50.670468  209870 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000404613s
	I0110 09:09:50.675514  209870 kubeadm.go:319] 
	I0110 09:09:50.675647  209870 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:09:50.675714  209870 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:09:50.675911  209870 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:09:50.675926  209870 kubeadm.go:319] 
	I0110 09:09:50.676109  209870 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:09:50.676170  209870 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:09:50.676227  209870 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:09:50.676235  209870 kubeadm.go:319] 
	I0110 09:09:50.676744  209870 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:09:50.677480  209870 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:09:50.677676  209870 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:09:50.678147  209870 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 09:09:50.678158  209870 kubeadm.go:319] 
	I0110 09:09:50.678277  209870 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 09:09:50.678412  209870 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000404613s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000404613s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
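This is the end of the first kubeadm init attempt: the kubelet never answered on 127.0.0.1:10248, so the wait-control-plane phase gives up after 4 minutes and minikube falls back to kubeadm reset and a retry (below). The kubeadm output above already names the two obvious follow-ups; as a rough sketch, they can be run inside the node container, again assuming the docker-driver container is named after the profile:

	# Sketch only; container name taken from this run.
	docker exec force-systemd-flag-447307 systemctl status kubelet --no-pager
	docker exec force-systemd-flag-447307 journalctl -xeu kubelet --no-pager | tail -n 50

The kubelet journal is also what minikube itself collects further down in this log ("Gathering logs for kubelet ...").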
	I0110 09:09:50.678496  209870 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0110 09:09:51.095151  209870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:09:51.109734  209870 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:09:51.109810  209870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:09:51.119457  209870 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:09:51.119476  209870 kubeadm.go:158] found existing configuration files:
	
	I0110 09:09:51.119530  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:09:51.128307  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:09:51.128401  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:09:51.136766  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:09:51.145590  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:09:51.145673  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:09:51.154563  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:09:51.163162  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:09:51.163284  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:09:51.171783  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:09:51.180978  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:09:51.181056  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:09:51.189977  209870 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:09:51.236476  209870 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:09:51.236816  209870 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:09:51.314053  209870 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:09:51.314136  209870 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:09:51.314180  209870 kubeadm.go:319] OS: Linux
	I0110 09:09:51.314241  209870 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:09:51.314296  209870 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:09:51.314361  209870 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:09:51.314415  209870 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:09:51.314467  209870 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:09:51.314534  209870 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:09:51.314605  209870 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:09:51.314672  209870 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:09:51.314725  209870 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:09:51.388648  209870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:09:51.388866  209870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:09:51.389020  209870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:09:51.395580  209870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:09:51.399132  209870 out.go:252]   - Generating certificates and keys ...
	I0110 09:09:51.399241  209870 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:09:51.399316  209870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:09:51.399477  209870 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 09:09:51.399541  209870 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 09:09:51.399611  209870 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 09:09:51.399669  209870 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 09:09:51.399736  209870 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 09:09:51.400069  209870 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 09:09:51.400430  209870 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 09:09:51.400705  209870 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 09:09:51.400928  209870 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 09:09:51.401000  209870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:09:51.651158  209870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:09:51.976821  209870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:09:52.238092  209870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:09:52.382407  209870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:09:52.599476  209870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:09:52.599589  209870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:09:52.599662  209870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:09:52.602908  209870 out.go:252]   - Booting up control plane ...
	I0110 09:09:52.603023  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:09:52.603145  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:09:52.605307  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:09:52.628653  209870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:09:52.628838  209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:09:52.642380  209870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:09:52.642489  209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:09:52.642535  209870 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:09:52.838446  209870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:09:52.838573  209870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:13:52.839263  209870 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001081313s
	I0110 09:13:52.839294  209870 kubeadm.go:319] 
	I0110 09:13:52.839378  209870 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:13:52.839416  209870 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:13:52.839522  209870 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:13:52.839531  209870 kubeadm.go:319] 
	I0110 09:13:52.839635  209870 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:13:52.839666  209870 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:13:52.839697  209870 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:13:52.839701  209870 kubeadm.go:319] 
	I0110 09:13:52.844761  209870 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:13:52.845168  209870 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:13:52.845278  209870 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:13:52.845530  209870 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 09:13:52.845541  209870 kubeadm.go:319] 
	I0110 09:13:52.845606  209870 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
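	The retry fails in exactly the same way, and the repeated SystemVerification warning is the most concrete lead in this log: the 5.15.0-1084-aws host is still on cgroups v1, and per the warning text kubelet v1.35 or newer only tolerates that when the KubeletConfiguration field failCgroupV1 is explicitly set to false (whether this job is expected to set that, or to run on a cgroup v2 host, is not something the log itself answers). A quick, hedged check of which cgroup hierarchy the node actually sees:

	# Sketch only; container name as above. "cgroup2fs" indicates cgroup v2, "tmpfs" indicates v1.
	docker exec force-systemd-flag-447307 stat -fc %T /sys/fs/cgroup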
	I0110 09:13:52.845665  209870 kubeadm.go:403] duration metric: took 8m6.505307114s to StartCluster
	I0110 09:13:52.845717  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0110 09:13:52.845786  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 09:13:52.876377  209870 cri.go:96] found id: ""
	I0110 09:13:52.876416  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.876425  209870 logs.go:284] No container was found matching "kube-apiserver"
	I0110 09:13:52.876432  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0110 09:13:52.876504  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 09:13:52.902015  209870 cri.go:96] found id: ""
	I0110 09:13:52.902038  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.902047  209870 logs.go:284] No container was found matching "etcd"
	I0110 09:13:52.902055  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0110 09:13:52.902130  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 09:13:52.930106  209870 cri.go:96] found id: ""
	I0110 09:13:52.930126  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.930135  209870 logs.go:284] No container was found matching "coredns"
	I0110 09:13:52.930141  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0110 09:13:52.930200  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 09:13:52.962753  209870 cri.go:96] found id: ""
	I0110 09:13:52.962779  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.962788  209870 logs.go:284] No container was found matching "kube-scheduler"
	I0110 09:13:52.962794  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0110 09:13:52.962852  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 09:13:52.988595  209870 cri.go:96] found id: ""
	I0110 09:13:52.988621  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.988630  209870 logs.go:284] No container was found matching "kube-proxy"
	I0110 09:13:52.988637  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 09:13:52.988699  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 09:13:53.013852  209870 cri.go:96] found id: ""
	I0110 09:13:53.013877  209870 logs.go:282] 0 containers: []
	W0110 09:13:53.013886  209870 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 09:13:53.013893  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0110 09:13:53.013952  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 09:13:53.043720  209870 cri.go:96] found id: ""
	I0110 09:13:53.043743  209870 logs.go:282] 0 containers: []
	W0110 09:13:53.043752  209870 logs.go:284] No container was found matching "kindnet"
	I0110 09:13:53.043763  209870 logs.go:123] Gathering logs for kubelet ...
	I0110 09:13:53.043775  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 09:13:53.104744  209870 logs.go:123] Gathering logs for dmesg ...
	I0110 09:13:53.104780  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 09:13:53.118863  209870 logs.go:123] Gathering logs for describe nodes ...
	I0110 09:13:53.118893  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 09:13:53.212815  209870 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:13:53.185939    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.187077    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.203692    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.206842    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.207578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 09:13:53.185939    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.187077    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.203692    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.206842    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.207578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 09:13:53.212852  209870 logs.go:123] Gathering logs for containerd ...
	I0110 09:13:53.212865  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0110 09:13:53.257937  209870 logs.go:123] Gathering logs for container status ...
	I0110 09:13:53.257973  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0110 09:13:53.288259  209870 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001081313s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 09:13:53.288311  209870 out.go:285] * 
	* 
	W0110 09:13:53.288366  209870 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001081313s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001081313s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:13:53.288383  209870 out.go:285] * 
	* 
	W0110 09:13:53.288643  209870 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:13:53.293744  209870 out.go:203] 
	W0110 09:13:53.295826  209870 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001081313s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001081313s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:13:53.295871  209870 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 09:13:53.295893  209870 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 09:13:53.298998  209870 out.go:203] 

                                                
                                                
** /stderr **
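The stderr above ends with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of retrying this profile with that hint follows; the binary, profile name, and flags are taken from this run, and whether this actually clears the kubelet health-check failure on this cgroup-v1 host is not verified here.

	# Sketch only: re-run the failed start with the cgroup-driver hint from the suggestion above.
	# Profile name, binary, and flags mirror this run; success is not verified here.
	out/minikube-linux-arm64 delete -p force-systemd-flag-447307
	out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd \
	  --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd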
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-447307 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-10 09:13:53.695273409 +0000 UTC m=+3214.877660526
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-447307
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-447307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820",
	        "Created": "2026-01-10T09:05:38.387757994Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 210320,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T09:05:38.479909691Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820/hosts",
	        "LogPath": "/var/lib/docker/containers/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820/cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820-json.log",
	        "Name": "/force-systemd-flag-447307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "force-systemd-flag-447307:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-447307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cb73d53b6fd188eb5f2bb3e29a1a85fff1c547f5b1aa977304503792b4c23820",
	                "LowerDir": "/var/lib/docker/overlay2/a59d50cb78c4dbf0338446645257dc4f52e592a8debda4386d1c61f29ae69956-init/diff:/var/lib/docker/overlay2/54d275d5bf894b41181c968ee2ec1be6f053e8252dc2214525d0175b72739adc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a59d50cb78c4dbf0338446645257dc4f52e592a8debda4386d1c61f29ae69956/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a59d50cb78c4dbf0338446645257dc4f52e592a8debda4386d1c61f29ae69956/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a59d50cb78c4dbf0338446645257dc4f52e592a8debda4386d1c61f29ae69956/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-447307",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-447307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-447307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-447307",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-447307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a130cb8bfa8d956c8f568fd80fddbaf234fbab1f01b136430c341c0116b6254",
	            "SandboxKey": "/var/run/docker/netns/7a130cb8bfa8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-447307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:cc:a6:1f:06:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "60d0a65897eac77dac2971ccc13a883191ca99bb752fffc4484eae90b26abc3a",
	                    "EndpointID": "8f1a3fbd028148a59dfafd740a903027257cf1f28d7f93ee13840909ca85b75c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-447307",
	                        "cb73d53b6fd1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-447307 -n force-systemd-flag-447307
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-447307 -n force-systemd-flag-447307: exit status 6 (364.927312ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 09:13:54.065918  238626 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-447307" does not appear in /home/jenkins/minikube-integration/22427-2439/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
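The status output above warns that kubectl points at a stale context and suggests `minikube update-context`. A sketch of that suggested fix for this profile is below; note that per the stderr the profile never registered an endpoint in the kubeconfig, so this is unlikely to succeed until the start itself completes.

	# Sketch of the fix suggested by the status warning above; in this run the profile
	# is missing from the kubeconfig entirely, so update-context may still fail.
	out/minikube-linux-arm64 update-context -p force-systemd-flag-447307
	kubectl config current-context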
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-447307 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ cert-options-050298 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-050298       │ jenkins │ v1.37.0 │ 10 Jan 26 09:08 UTC │ 10 Jan 26 09:08 UTC │
	│ ssh     │ -p cert-options-050298 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-050298       │ jenkins │ v1.37.0 │ 10 Jan 26 09:08 UTC │ 10 Jan 26 09:08 UTC │
	│ delete  │ -p cert-options-050298                                                                                                                                                                                                                              │ cert-options-050298       │ jenkins │ v1.37.0 │ 10 Jan 26 09:08 UTC │ 10 Jan 26 09:08 UTC │
	│ start   │ -p old-k8s-version-072756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:08 UTC │ 10 Jan 26 09:09 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-072756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:09 UTC │ 10 Jan 26 09:09 UTC │
	│ stop    │ -p old-k8s-version-072756 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:09 UTC │ 10 Jan 26 09:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-072756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:09 UTC │ 10 Jan 26 09:09 UTC │
	│ start   │ -p old-k8s-version-072756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:09 UTC │ 10 Jan 26 09:10 UTC │
	│ image   │ old-k8s-version-072756 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
	│ pause   │ -p old-k8s-version-072756 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
	│ unpause │ -p old-k8s-version-072756 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
	│ delete  │ -p old-k8s-version-072756                                                                                                                                                                                                                           │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
	│ delete  │ -p old-k8s-version-072756                                                                                                                                                                                                                           │ old-k8s-version-072756    │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:10 UTC │
	│ start   │ -p no-preload-765043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:10 UTC │ 10 Jan 26 09:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-765043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:11 UTC │ 10 Jan 26 09:11 UTC │
	│ stop    │ -p no-preload-765043 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:11 UTC │ 10 Jan 26 09:11 UTC │
	│ addons  │ enable dashboard -p no-preload-765043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:11 UTC │ 10 Jan 26 09:11 UTC │
	│ start   │ -p no-preload-765043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:11 UTC │ 10 Jan 26 09:12 UTC │
	│ image   │ no-preload-765043 image list --format=json                                                                                                                                                                                                          │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
	│ pause   │ -p no-preload-765043 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
	│ unpause │ -p no-preload-765043 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
	│ delete  │ -p no-preload-765043                                                                                                                                                                                                                                │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
	│ delete  │ -p no-preload-765043                                                                                                                                                                                                                                │ no-preload-765043         │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │ 10 Jan 26 09:12 UTC │
	│ start   │ -p embed-certs-070240 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-070240        │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ ssh     │ force-systemd-flag-447307 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-447307 │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:13:00
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 09:13:00.022185  235056 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:13:00.022384  235056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:13:00.022411  235056 out.go:374] Setting ErrFile to fd 2...
	I0110 09:13:00.022431  235056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:13:00.022741  235056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 09:13:00.023225  235056 out.go:368] Setting JSON to false
	I0110 09:13:00.024087  235056 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3333,"bootTime":1768033047,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 09:13:00.024189  235056 start.go:143] virtualization:  
	I0110 09:13:00.039116  235056 out.go:179] * [embed-certs-070240] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:13:00.044442  235056 notify.go:221] Checking for updates...
	I0110 09:13:00.052758  235056 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:13:00.056787  235056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:13:00.061873  235056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 09:13:00.065162  235056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 09:13:00.072465  235056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:13:00.076807  235056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:13:00.095191  235056 config.go:182] Loaded profile config "force-systemd-flag-447307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 09:13:00.095321  235056 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:13:00.156762  235056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:13:00.156918  235056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:13:00.288095  235056 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:13:00.269580082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:13:00.288276  235056 docker.go:319] overlay module found
	I0110 09:13:00.297812  235056 out.go:179] * Using the docker driver based on user configuration
	I0110 09:13:00.301002  235056 start.go:309] selected driver: docker
	I0110 09:13:00.301028  235056 start.go:928] validating driver "docker" against <nil>
	I0110 09:13:00.301044  235056 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:13:00.301981  235056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:13:00.451243  235056 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:13:00.440097007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:13:00.451646  235056 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:13:00.451955  235056 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 09:13:00.455194  235056 out.go:179] * Using Docker driver with root privileges
	I0110 09:13:00.458322  235056 cni.go:84] Creating CNI manager for ""
	I0110 09:13:00.458435  235056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 09:13:00.458455  235056 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 09:13:00.458553  235056 start.go:353] cluster config:
	{Name:embed-certs-070240 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:13:00.463831  235056 out.go:179] * Starting "embed-certs-070240" primary control-plane node in "embed-certs-070240" cluster
	I0110 09:13:00.466841  235056 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0110 09:13:00.470201  235056 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:13:00.473256  235056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:13:00.473303  235056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:13:00.473317  235056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I0110 09:13:00.473350  235056 cache.go:65] Caching tarball of preloaded images
	I0110 09:13:00.473465  235056 preload.go:251] Found /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 09:13:00.473477  235056 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I0110 09:13:00.473648  235056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/config.json ...
	I0110 09:13:00.473682  235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/config.json: {Name:mkbe327345a9c10462c0cfeae6ecc074773073dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:00.501418  235056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 09:13:00.501448  235056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 09:13:00.501468  235056 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:13:00.501511  235056 start.go:360] acquireMachinesLock for embed-certs-070240: {Name:mkf4458ca775ec5ea65331dd67fbe532fef85672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:13:00.501630  235056 start.go:364] duration metric: took 96.823µs to acquireMachinesLock for "embed-certs-070240"
	I0110 09:13:00.501667  235056 start.go:93] Provisioning new machine with config: &{Name:embed-certs-070240 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0110 09:13:00.501749  235056 start.go:125] createHost starting for "" (driver="docker")
	I0110 09:13:00.505590  235056 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 09:13:00.505890  235056 start.go:159] libmachine.API.Create for "embed-certs-070240" (driver="docker")
	I0110 09:13:00.505937  235056 client.go:173] LocalClient.Create starting
	I0110 09:13:00.506108  235056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem
	I0110 09:13:00.506180  235056 main.go:144] libmachine: Decoding PEM data...
	I0110 09:13:00.506209  235056 main.go:144] libmachine: Parsing certificate...
	I0110 09:13:00.506313  235056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem
	I0110 09:13:00.506332  235056 main.go:144] libmachine: Decoding PEM data...
	I0110 09:13:00.506343  235056 main.go:144] libmachine: Parsing certificate...
	I0110 09:13:00.506731  235056 cli_runner.go:164] Run: docker network inspect embed-certs-070240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 09:13:00.525610  235056 cli_runner.go:211] docker network inspect embed-certs-070240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 09:13:00.525700  235056 network_create.go:284] running [docker network inspect embed-certs-070240] to gather additional debugging logs...
	I0110 09:13:00.525739  235056 cli_runner.go:164] Run: docker network inspect embed-certs-070240
	W0110 09:13:00.545392  235056 cli_runner.go:211] docker network inspect embed-certs-070240 returned with exit code 1
	I0110 09:13:00.545426  235056 network_create.go:287] error running [docker network inspect embed-certs-070240]: docker network inspect embed-certs-070240: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-070240 not found
	I0110 09:13:00.545440  235056 network_create.go:289] output of [docker network inspect embed-certs-070240]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-070240 not found
	
	** /stderr **
	I0110 09:13:00.545546  235056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:13:00.565012  235056 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e01acd8ff726 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:8b:1d:1f:6a:28} reservation:<nil>}
	I0110 09:13:00.565498  235056 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab4f89e52867 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:d7:2a:6d:f4:96} reservation:<nil>}
	I0110 09:13:00.565964  235056 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8b226bd60dd7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:a9:74:b4} reservation:<nil>}
	I0110 09:13:00.566487  235056 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001963f80}
	I0110 09:13:00.566510  235056 network_create.go:124] attempt to create docker network embed-certs-070240 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 09:13:00.566582  235056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-070240 embed-certs-070240
	I0110 09:13:00.629172  235056 network_create.go:108] docker network embed-certs-070240 192.168.76.0/24 created
	I0110 09:13:00.629204  235056 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-070240" container
	I0110 09:13:00.629285  235056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 09:13:00.646345  235056 cli_runner.go:164] Run: docker volume create embed-certs-070240 --label name.minikube.sigs.k8s.io=embed-certs-070240 --label created_by.minikube.sigs.k8s.io=true
	I0110 09:13:00.664895  235056 oci.go:103] Successfully created a docker volume embed-certs-070240
	I0110 09:13:00.664978  235056 cli_runner.go:164] Run: docker run --rm --name embed-certs-070240-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-070240 --entrypoint /usr/bin/test -v embed-certs-070240:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 09:13:01.225727  235056 oci.go:107] Successfully prepared a docker volume embed-certs-070240
	I0110 09:13:01.225805  235056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:13:01.225821  235056 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 09:13:01.225900  235056 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-070240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 09:13:05.167153  235056 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-070240:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.941211081s)
	I0110 09:13:05.167188  235056 kic.go:203] duration metric: took 3.941363528s to extract preloaded images to volume ...
	W0110 09:13:05.167371  235056 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 09:13:05.167489  235056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 09:13:05.267595  235056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-070240 --name embed-certs-070240 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-070240 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-070240 --network embed-certs-070240 --ip 192.168.76.2 --volume embed-certs-070240:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 09:13:05.584752  235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Running}}
	I0110 09:13:05.603862  235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
	I0110 09:13:05.624497  235056 cli_runner.go:164] Run: docker exec embed-certs-070240 stat /var/lib/dpkg/alternatives/iptables
	I0110 09:13:05.678402  235056 oci.go:144] the created container "embed-certs-070240" has a running status.
	I0110 09:13:05.678429  235056 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa...
	I0110 09:13:05.778910  235056 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 09:13:05.796738  235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
	I0110 09:13:05.818623  235056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 09:13:05.818647  235056 kic_runner.go:114] Args: [docker exec --privileged embed-certs-070240 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 09:13:05.873216  235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
	I0110 09:13:05.894406  235056 machine.go:94] provisionDockerMachine start ...
	I0110 09:13:05.894488  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:05.922411  235056 main.go:144] libmachine: Using SSH client type: native
	I0110 09:13:05.922748  235056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I0110 09:13:05.922756  235056 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 09:13:05.925488  235056 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 09:13:09.091223  235056 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-070240
	
	I0110 09:13:09.091289  235056 ubuntu.go:182] provisioning hostname "embed-certs-070240"
	I0110 09:13:09.091399  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:09.109904  235056 main.go:144] libmachine: Using SSH client type: native
	I0110 09:13:09.110242  235056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I0110 09:13:09.110261  235056 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-070240 && echo "embed-certs-070240" | sudo tee /etc/hostname
	I0110 09:13:09.273863  235056 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-070240
	
	I0110 09:13:09.273995  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:09.291523  235056 main.go:144] libmachine: Using SSH client type: native
	I0110 09:13:09.291875  235056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I0110 09:13:09.291902  235056 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-070240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-070240/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-070240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 09:13:09.440244  235056 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 09:13:09.440319  235056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2439/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2439/.minikube}
	I0110 09:13:09.440392  235056 ubuntu.go:190] setting up certificates
	I0110 09:13:09.440423  235056 provision.go:84] configureAuth start
	I0110 09:13:09.440513  235056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-070240
	I0110 09:13:09.461280  235056 provision.go:143] copyHostCerts
	I0110 09:13:09.461356  235056 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem, removing ...
	I0110 09:13:09.461370  235056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
	I0110 09:13:09.461462  235056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem (1123 bytes)
	I0110 09:13:09.461561  235056 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem, removing ...
	I0110 09:13:09.461570  235056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
	I0110 09:13:09.461597  235056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem (1675 bytes)
	I0110 09:13:09.461679  235056 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem, removing ...
	I0110 09:13:09.461690  235056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
	I0110 09:13:09.461715  235056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem (1078 bytes)
	I0110 09:13:09.461767  235056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem org=jenkins.embed-certs-070240 san=[127.0.0.1 192.168.76.2 embed-certs-070240 localhost minikube]
	I0110 09:13:09.509406  235056 provision.go:177] copyRemoteCerts
	I0110 09:13:09.509494  235056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 09:13:09.509543  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:09.526162  235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
	I0110 09:13:09.633243  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 09:13:09.652069  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0110 09:13:09.670949  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 09:13:09.692661  235056 provision.go:87] duration metric: took 252.199956ms to configureAuth
	I0110 09:13:09.692687  235056 ubuntu.go:206] setting minikube options for container-runtime
	I0110 09:13:09.692874  235056 config.go:182] Loaded profile config "embed-certs-070240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 09:13:09.692881  235056 machine.go:97] duration metric: took 3.798457289s to provisionDockerMachine
	I0110 09:13:09.692888  235056 client.go:176] duration metric: took 9.186921227s to LocalClient.Create
	I0110 09:13:09.692902  235056 start.go:167] duration metric: took 9.187022185s to libmachine.API.Create "embed-certs-070240"
	I0110 09:13:09.692909  235056 start.go:293] postStartSetup for "embed-certs-070240" (driver="docker")
	I0110 09:13:09.692918  235056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 09:13:09.692969  235056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 09:13:09.693024  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:09.716823  235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
	I0110 09:13:09.827652  235056 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 09:13:09.831222  235056 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 09:13:09.831247  235056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 09:13:09.831258  235056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/addons for local assets ...
	I0110 09:13:09.831313  235056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/files for local assets ...
	I0110 09:13:09.831410  235056 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> 42572.pem in /etc/ssl/certs
	I0110 09:13:09.831514  235056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 09:13:09.839554  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /etc/ssl/certs/42572.pem (1708 bytes)
	I0110 09:13:09.857921  235056 start.go:296] duration metric: took 164.997987ms for postStartSetup
	I0110 09:13:09.858300  235056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-070240
	I0110 09:13:09.875458  235056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/config.json ...
	I0110 09:13:09.875759  235056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:13:09.875816  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:09.893508  235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
	I0110 09:13:09.996282  235056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 09:13:10.000965  235056 start.go:128] duration metric: took 9.499201127s to createHost
	I0110 09:13:10.001040  235056 start.go:83] releasing machines lock for "embed-certs-070240", held for 9.499392491s
	I0110 09:13:10.001147  235056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-070240
	I0110 09:13:10.018146  235056 ssh_runner.go:195] Run: cat /version.json
	I0110 09:13:10.018208  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:10.018508  235056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 09:13:10.018572  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:10.039826  235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
	I0110 09:13:10.054582  235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
	I0110 09:13:10.250898  235056 ssh_runner.go:195] Run: systemctl --version
	I0110 09:13:10.257525  235056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 09:13:10.262233  235056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 09:13:10.262347  235056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 09:13:10.289787  235056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 09:13:10.289817  235056 start.go:496] detecting cgroup driver to use...
	I0110 09:13:10.289874  235056 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 09:13:10.289957  235056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0110 09:13:10.305610  235056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 09:13:10.318921  235056 docker.go:218] disabling cri-docker service (if available) ...
	I0110 09:13:10.319006  235056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 09:13:10.337214  235056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 09:13:10.356870  235056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 09:13:10.489153  235056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 09:13:10.618231  235056 docker.go:234] disabling docker service ...
	I0110 09:13:10.618348  235056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 09:13:10.640954  235056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 09:13:10.654875  235056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 09:13:10.781345  235056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 09:13:10.907663  235056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 09:13:10.921239  235056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 09:13:10.936750  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 09:13:10.946292  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 09:13:10.955641  235056 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I0110 09:13:10.955732  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0110 09:13:10.965181  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 09:13:10.974777  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 09:13:10.984160  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 09:13:10.993414  235056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 09:13:11.002277  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 09:13:11.011552  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 09:13:11.020731  235056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 09:13:11.032164  235056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 09:13:11.041266  235056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 09:13:11.049538  235056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:13:11.168198  235056 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0110 09:13:11.316082  235056 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I0110 09:13:11.316198  235056 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0110 09:13:11.320219  235056 start.go:574] Will wait 60s for crictl version
	I0110 09:13:11.320342  235056 ssh_runner.go:195] Run: which crictl
	I0110 09:13:11.324162  235056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 09:13:11.348965  235056 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I0110 09:13:11.349038  235056 ssh_runner.go:195] Run: containerd --version
	I0110 09:13:11.370593  235056 ssh_runner.go:195] Run: containerd --version
	I0110 09:13:11.397066  235056 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I0110 09:13:11.399941  235056 cli_runner.go:164] Run: docker network inspect embed-certs-070240 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:13:11.417945  235056 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 09:13:11.422087  235056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:13:11.432146  235056 kubeadm.go:884] updating cluster {Name:embed-certs-070240 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 09:13:11.432265  235056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:13:11.432334  235056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:13:11.457650  235056 containerd.go:635] all images are preloaded for containerd runtime.
	I0110 09:13:11.457678  235056 containerd.go:542] Images already preloaded, skipping extraction
	I0110 09:13:11.457739  235056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:13:11.482672  235056 containerd.go:635] all images are preloaded for containerd runtime.
	I0110 09:13:11.482696  235056 cache_images.go:86] Images are preloaded, skipping loading
	I0110 09:13:11.482705  235056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I0110 09:13:11.482801  235056 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-070240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 09:13:11.482880  235056 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I0110 09:13:11.509165  235056 cni.go:84] Creating CNI manager for ""
	I0110 09:13:11.509191  235056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 09:13:11.509209  235056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 09:13:11.509237  235056 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-070240 NodeName:embed-certs-070240 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 09:13:11.509365  235056 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-070240"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 09:13:11.509436  235056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 09:13:11.517572  235056 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 09:13:11.517660  235056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 09:13:11.525667  235056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0110 09:13:11.539457  235056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 09:13:11.553317  235056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
	I0110 09:13:11.566470  235056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 09:13:11.570101  235056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:13:11.579890  235056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:13:11.696009  235056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:13:11.712950  235056 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240 for IP: 192.168.76.2
	I0110 09:13:11.712973  235056 certs.go:195] generating shared ca certs ...
	I0110 09:13:11.712990  235056 certs.go:227] acquiring lock for ca certs: {Name:mk2efb7c26990a28337b434f05b8d75a57c7c690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:11.713133  235056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key
	I0110 09:13:11.713182  235056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key
	I0110 09:13:11.713196  235056 certs.go:257] generating profile certs ...
	I0110 09:13:11.713250  235056 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.key
	I0110 09:13:11.713266  235056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.crt with IP's: []
	I0110 09:13:12.910441  235056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.crt ...
	I0110 09:13:12.910475  235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.crt: {Name:mk2d9389004d811bee0bcc877ceae3ae60d37010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:12.910677  235056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.key ...
	I0110 09:13:12.910690  235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/client.key: {Name:mk2b1f219c971b2eb1bc9dceb288ef6f57e6e435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:12.910788  235056 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key.91a638bd
	I0110 09:13:12.910803  235056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt.91a638bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 09:13:13.109971  235056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt.91a638bd ...
	I0110 09:13:13.110001  235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt.91a638bd: {Name:mk2a7b42284bd924503f7c2e46ff2701108bcfb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:13.110184  235056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key.91a638bd ...
	I0110 09:13:13.110198  235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key.91a638bd: {Name:mk8800551c00db7229ba8254880432fdd5f179c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:13.110281  235056 certs.go:382] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt.91a638bd -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt
	I0110 09:13:13.110363  235056 certs.go:386] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key.91a638bd -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key
	I0110 09:13:13.110424  235056 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.key
	I0110 09:13:13.110442  235056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.crt with IP's: []
	I0110 09:13:13.540851  235056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.crt ...
	I0110 09:13:13.540891  235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.crt: {Name:mk43c326df22054de9ff9244dc3d225172273ca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:13.541090  235056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.key ...
	I0110 09:13:13.541106  235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.key: {Name:mk870d0c307b657814b43ca961ebe34168a48094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:13.541293  235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem (1338 bytes)
	W0110 09:13:13.541340  235056 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257_empty.pem, impossibly tiny 0 bytes
	I0110 09:13:13.541356  235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 09:13:13.541385  235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem (1078 bytes)
	I0110 09:13:13.541416  235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem (1123 bytes)
	I0110 09:13:13.541445  235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem (1675 bytes)
	I0110 09:13:13.541493  235056 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem (1708 bytes)
	I0110 09:13:13.542101  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 09:13:13.561952  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 09:13:13.580666  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 09:13:13.598633  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 09:13:13.616533  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0110 09:13:13.634498  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 09:13:13.652645  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 09:13:13.670048  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/embed-certs-070240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 09:13:13.687975  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem --> /usr/share/ca-certificates/4257.pem (1338 bytes)
	I0110 09:13:13.705676  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /usr/share/ca-certificates/42572.pem (1708 bytes)
	I0110 09:13:13.723435  235056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 09:13:13.740364  235056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 09:13:13.753153  235056 ssh_runner.go:195] Run: openssl version
	I0110 09:13:13.759208  235056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4257.pem
	I0110 09:13:13.766493  235056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4257.pem /etc/ssl/certs/4257.pem
	I0110 09:13:13.774135  235056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4257.pem
	I0110 09:13:13.777904  235056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:27 /usr/share/ca-certificates/4257.pem
	I0110 09:13:13.777969  235056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4257.pem
	I0110 09:13:13.819373  235056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 09:13:13.827076  235056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4257.pem /etc/ssl/certs/51391683.0
	I0110 09:13:13.834306  235056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42572.pem
	I0110 09:13:13.841704  235056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42572.pem /etc/ssl/certs/42572.pem
	I0110 09:13:13.849677  235056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42572.pem
	I0110 09:13:13.853430  235056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:27 /usr/share/ca-certificates/42572.pem
	I0110 09:13:13.853495  235056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42572.pem
	I0110 09:13:13.893893  235056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 09:13:13.901563  235056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42572.pem /etc/ssl/certs/3ec20f2e.0
	I0110 09:13:13.908947  235056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:13:13.916153  235056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 09:13:13.923284  235056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:13:13.927113  235056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:13:13.927209  235056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:13:13.970696  235056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 09:13:13.978345  235056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 09:13:13.986564  235056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 09:13:13.990904  235056 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 09:13:13.990954  235056 kubeadm.go:401] StartCluster: {Name:embed-certs-070240 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-070240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:13:13.991044  235056 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0110 09:13:13.991109  235056 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:13:14.020938  235056 cri.go:96] found id: ""
	I0110 09:13:14.021077  235056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 09:13:14.030776  235056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 09:13:14.039661  235056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:13:14.039750  235056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:13:14.048317  235056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:13:14.048382  235056 kubeadm.go:158] found existing configuration files:
	
	I0110 09:13:14.048485  235056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:13:14.056355  235056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:13:14.056422  235056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:13:14.063756  235056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:13:14.071469  235056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:13:14.071569  235056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:13:14.079069  235056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:13:14.086589  235056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:13:14.086684  235056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:13:14.094127  235056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:13:14.101697  235056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:13:14.101759  235056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:13:14.109134  235056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:13:14.144534  235056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:13:14.144658  235056 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:13:14.229150  235056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:13:14.229293  235056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:13:14.229365  235056 kubeadm.go:319] OS: Linux
	I0110 09:13:14.229454  235056 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:13:14.229545  235056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:13:14.229629  235056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:13:14.229712  235056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:13:14.229795  235056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:13:14.229883  235056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:13:14.229974  235056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:13:14.230055  235056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:13:14.230135  235056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:13:14.294765  235056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:13:14.294911  235056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:13:14.295032  235056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:13:14.300901  235056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:13:14.307584  235056 out.go:252]   - Generating certificates and keys ...
	I0110 09:13:14.307738  235056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:13:14.307835  235056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:13:14.454524  235056 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 09:13:14.725137  235056 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 09:13:15.139043  235056 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 09:13:15.202772  235056 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 09:13:15.335767  235056 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 09:13:15.335971  235056 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-070240 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 09:13:15.700029  235056 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 09:13:15.700662  235056 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-070240 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 09:13:15.972725  235056 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 09:13:16.258452  235056 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 09:13:16.307955  235056 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 09:13:16.308275  235056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:13:16.728858  235056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:13:16.889414  235056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:13:16.982623  235056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:13:17.129058  235056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:13:17.372006  235056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:13:17.372647  235056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:13:17.375259  235056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:13:17.378999  235056 out.go:252]   - Booting up control plane ...
	I0110 09:13:17.379102  235056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:13:17.379181  235056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:13:17.380148  235056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:13:17.395884  235056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:13:17.396329  235056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:13:17.404117  235056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:13:17.404559  235056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:13:17.404805  235056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:13:17.533267  235056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:13:17.533418  235056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:13:18.534205  235056 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000968186s
	I0110 09:13:18.537863  235056 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 09:13:18.537965  235056 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 09:13:18.538060  235056 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 09:13:18.538165  235056 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 09:13:20.551006  235056 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.01238558s
	I0110 09:13:22.110493  235056 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.57265444s
	I0110 09:13:24.040141  235056 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502035862s
	I0110 09:13:24.076210  235056 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 09:13:24.095024  235056 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 09:13:24.113736  235056 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 09:13:24.114216  235056 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-070240 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 09:13:24.131459  235056 kubeadm.go:319] [bootstrap-token] Using token: 4u9dpa.2ia3kp786y1ddq78
	I0110 09:13:24.134432  235056 out.go:252]   - Configuring RBAC rules ...
	I0110 09:13:24.134561  235056 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 09:13:24.142619  235056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 09:13:24.156142  235056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 09:13:24.162245  235056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 09:13:24.170776  235056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 09:13:24.176680  235056 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 09:13:24.447601  235056 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 09:13:24.889103  235056 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 09:13:25.451545  235056 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 09:13:25.453376  235056 kubeadm.go:319] 
	I0110 09:13:25.453449  235056 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 09:13:25.453455  235056 kubeadm.go:319] 
	I0110 09:13:25.453555  235056 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 09:13:25.453560  235056 kubeadm.go:319] 
	I0110 09:13:25.453586  235056 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 09:13:25.453645  235056 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 09:13:25.453696  235056 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 09:13:25.453700  235056 kubeadm.go:319] 
	I0110 09:13:25.453756  235056 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 09:13:25.453760  235056 kubeadm.go:319] 
	I0110 09:13:25.453808  235056 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 09:13:25.453811  235056 kubeadm.go:319] 
	I0110 09:13:25.453864  235056 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 09:13:25.453939  235056 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 09:13:25.454009  235056 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 09:13:25.454012  235056 kubeadm.go:319] 
	I0110 09:13:25.454097  235056 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 09:13:25.454173  235056 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 09:13:25.454177  235056 kubeadm.go:319] 
	I0110 09:13:25.454260  235056 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4u9dpa.2ia3kp786y1ddq78 \
	I0110 09:13:25.454363  235056 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d698b7a2ca74d25eb75ad84fc365dd179d4946e37bd477a6d05d4b1a2fdc5a3c \
	I0110 09:13:25.454383  235056 kubeadm.go:319] 	--control-plane 
	I0110 09:13:25.454386  235056 kubeadm.go:319] 
	I0110 09:13:25.454471  235056 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 09:13:25.454475  235056 kubeadm.go:319] 
	I0110 09:13:25.454557  235056 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4u9dpa.2ia3kp786y1ddq78 \
	I0110 09:13:25.454658  235056 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d698b7a2ca74d25eb75ad84fc365dd179d4946e37bd477a6d05d4b1a2fdc5a3c 
	I0110 09:13:25.457212  235056 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:13:25.457640  235056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:13:25.457747  235056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:13:25.457761  235056 cni.go:84] Creating CNI manager for ""
	I0110 09:13:25.457769  235056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 09:13:25.460734  235056 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 09:13:25.463735  235056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 09:13:25.467792  235056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 09:13:25.467809  235056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 09:13:25.488456  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 09:13:25.816798  235056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 09:13:25.816933  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:25.817034  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-070240 minikube.k8s.io/updated_at=2026_01_10T09_13_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=embed-certs-070240 minikube.k8s.io/primary=true
	I0110 09:13:25.980218  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:25.980284  235056 ops.go:34] apiserver oom_adj: -16
	I0110 09:13:26.480394  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:26.980571  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:27.481130  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:27.980353  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:28.480468  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:28.980832  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:29.480360  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:29.980322  235056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:30.143180  235056 kubeadm.go:1114] duration metric: took 4.326293232s to wait for elevateKubeSystemPrivileges
	I0110 09:13:30.143221  235056 kubeadm.go:403] duration metric: took 16.152270365s to StartCluster
	I0110 09:13:30.143239  235056 settings.go:142] acquiring lock: {Name:mkb2ebd5d087e1c54fbd873c70e4f039c6456e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:30.143310  235056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 09:13:30.144396  235056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/kubeconfig: {Name:mk140954996243c884fdf4f6dda6bc952a39b87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:30.144672  235056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0110 09:13:30.144766  235056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 09:13:30.145045  235056 config.go:182] Loaded profile config "embed-certs-070240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 09:13:30.145088  235056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 09:13:30.145166  235056 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-070240"
	I0110 09:13:30.145189  235056 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-070240"
	I0110 09:13:30.145215  235056 host.go:66] Checking if "embed-certs-070240" exists ...
	I0110 09:13:30.145754  235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
	I0110 09:13:30.146329  235056 addons.go:70] Setting default-storageclass=true in profile "embed-certs-070240"
	I0110 09:13:30.146355  235056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-070240"
	I0110 09:13:30.146723  235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
	I0110 09:13:30.149274  235056 out.go:179] * Verifying Kubernetes components...
	I0110 09:13:30.160402  235056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:13:30.192062  235056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 09:13:30.197921  235056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 09:13:30.197954  235056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 09:13:30.198019  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:30.200136  235056 addons.go:239] Setting addon default-storageclass=true in "embed-certs-070240"
	I0110 09:13:30.200191  235056 host.go:66] Checking if "embed-certs-070240" exists ...
	I0110 09:13:30.200635  235056 cli_runner.go:164] Run: docker container inspect embed-certs-070240 --format={{.State.Status}}
	I0110 09:13:30.234511  235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
	I0110 09:13:30.246509  235056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 09:13:30.246536  235056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 09:13:30.246600  235056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-070240
	I0110 09:13:30.281038  235056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/embed-certs-070240/id_rsa Username:docker}
	I0110 09:13:30.468391  235056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 09:13:30.468496  235056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:13:30.500204  235056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 09:13:30.562336  235056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 09:13:30.901888  235056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-070240" to be "Ready" ...
	I0110 09:13:30.902222  235056 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 09:13:31.392928  235056 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0110 09:13:31.395709  235056 addons.go:530] duration metric: took 1.250611329s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0110 09:13:31.405870  235056 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-070240" context rescaled to 1 replicas
	W0110 09:13:32.905078  235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
	W0110 09:13:35.404431  235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
	W0110 09:13:37.406693  235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
	W0110 09:13:39.904753  235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
	W0110 09:13:41.904979  235056 node_ready.go:57] node "embed-certs-070240" has "Ready":"False" status (will retry)
	I0110 09:13:43.404607  235056 node_ready.go:49] node "embed-certs-070240" is "Ready"
	I0110 09:13:43.404639  235056 node_ready.go:38] duration metric: took 12.502715303s for node "embed-certs-070240" to be "Ready" ...
	I0110 09:13:43.404653  235056 api_server.go:52] waiting for apiserver process to appear ...
	I0110 09:13:43.404724  235056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 09:13:43.417269  235056 api_server.go:72] duration metric: took 13.272558352s to wait for apiserver process to appear ...
	I0110 09:13:43.417294  235056 api_server.go:88] waiting for apiserver healthz status ...
	I0110 09:13:43.417312  235056 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 09:13:43.425684  235056 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 09:13:43.426802  235056 api_server.go:141] control plane version: v1.35.0
	I0110 09:13:43.426827  235056 api_server.go:131] duration metric: took 9.525816ms to wait for apiserver health ...
	I0110 09:13:43.426836  235056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 09:13:43.435391  235056 system_pods.go:59] 8 kube-system pods found
	I0110 09:13:43.435424  235056 system_pods.go:61] "coredns-7d764666f9-6tr7h" [a28851eb-79e5-49a3-a177-bccbb53c272e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 09:13:43.435432  235056 system_pods.go:61] "etcd-embed-certs-070240" [18d8b4d6-5395-4dea-8f5d-08ffa960a7ab] Running
	I0110 09:13:43.435448  235056 system_pods.go:61] "kindnet-ns57l" [2f3cdacc-f2bb-49d1-9424-5fd62081ecaf] Running
	I0110 09:13:43.435453  235056 system_pods.go:61] "kube-apiserver-embed-certs-070240" [e0a3faa8-1472-4f68-a1fd-84f22a19c4d8] Running
	I0110 09:13:43.435462  235056 system_pods.go:61] "kube-controller-manager-embed-certs-070240" [1c9c96b6-d52e-48de-ba80-e464a7153b22] Running
	I0110 09:13:43.435467  235056 system_pods.go:61] "kube-proxy-txqld" [44053c1b-39fd-47b0-b88d-b01dc9ec9935] Running
	I0110 09:13:43.435473  235056 system_pods.go:61] "kube-scheduler-embed-certs-070240" [f7cafcd1-5017-46d4-803b-2ab15558f532] Running
	I0110 09:13:43.435479  235056 system_pods.go:61] "storage-provisioner" [c7153e29-69ef-4001-9a0a-c6b18ed7c134] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 09:13:43.435491  235056 system_pods.go:74] duration metric: took 8.648864ms to wait for pod list to return data ...
	I0110 09:13:43.435500  235056 default_sa.go:34] waiting for default service account to be created ...
	I0110 09:13:43.439320  235056 default_sa.go:45] found service account: "default"
	I0110 09:13:43.439343  235056 default_sa.go:55] duration metric: took 3.832195ms for default service account to be created ...
	I0110 09:13:43.439393  235056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 09:13:43.443923  235056 system_pods.go:86] 8 kube-system pods found
	I0110 09:13:43.443997  235056 system_pods.go:89] "coredns-7d764666f9-6tr7h" [a28851eb-79e5-49a3-a177-bccbb53c272e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 09:13:43.444019  235056 system_pods.go:89] "etcd-embed-certs-070240" [18d8b4d6-5395-4dea-8f5d-08ffa960a7ab] Running
	I0110 09:13:43.444042  235056 system_pods.go:89] "kindnet-ns57l" [2f3cdacc-f2bb-49d1-9424-5fd62081ecaf] Running
	I0110 09:13:43.444076  235056 system_pods.go:89] "kube-apiserver-embed-certs-070240" [e0a3faa8-1472-4f68-a1fd-84f22a19c4d8] Running
	I0110 09:13:43.444103  235056 system_pods.go:89] "kube-controller-manager-embed-certs-070240" [1c9c96b6-d52e-48de-ba80-e464a7153b22] Running
	I0110 09:13:43.444126  235056 system_pods.go:89] "kube-proxy-txqld" [44053c1b-39fd-47b0-b88d-b01dc9ec9935] Running
	I0110 09:13:43.444147  235056 system_pods.go:89] "kube-scheduler-embed-certs-070240" [f7cafcd1-5017-46d4-803b-2ab15558f532] Running
	I0110 09:13:43.444180  235056 system_pods.go:89] "storage-provisioner" [c7153e29-69ef-4001-9a0a-c6b18ed7c134] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 09:13:43.444226  235056 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0110 09:13:43.729092  235056 system_pods.go:86] 8 kube-system pods found
	I0110 09:13:43.729180  235056 system_pods.go:89] "coredns-7d764666f9-6tr7h" [a28851eb-79e5-49a3-a177-bccbb53c272e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 09:13:43.729201  235056 system_pods.go:89] "etcd-embed-certs-070240" [18d8b4d6-5395-4dea-8f5d-08ffa960a7ab] Running
	I0110 09:13:43.729241  235056 system_pods.go:89] "kindnet-ns57l" [2f3cdacc-f2bb-49d1-9424-5fd62081ecaf] Running
	I0110 09:13:43.729265  235056 system_pods.go:89] "kube-apiserver-embed-certs-070240" [e0a3faa8-1472-4f68-a1fd-84f22a19c4d8] Running
	I0110 09:13:43.729292  235056 system_pods.go:89] "kube-controller-manager-embed-certs-070240" [1c9c96b6-d52e-48de-ba80-e464a7153b22] Running
	I0110 09:13:43.729311  235056 system_pods.go:89] "kube-proxy-txqld" [44053c1b-39fd-47b0-b88d-b01dc9ec9935] Running
	I0110 09:13:43.729342  235056 system_pods.go:89] "kube-scheduler-embed-certs-070240" [f7cafcd1-5017-46d4-803b-2ab15558f532] Running
	I0110 09:13:43.729368  235056 system_pods.go:89] "storage-provisioner" [c7153e29-69ef-4001-9a0a-c6b18ed7c134] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 09:13:44.012729  235056 system_pods.go:86] 8 kube-system pods found
	I0110 09:13:44.012809  235056 system_pods.go:89] "coredns-7d764666f9-6tr7h" [a28851eb-79e5-49a3-a177-bccbb53c272e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 09:13:44.012836  235056 system_pods.go:89] "etcd-embed-certs-070240" [18d8b4d6-5395-4dea-8f5d-08ffa960a7ab] Running
	I0110 09:13:44.012876  235056 system_pods.go:89] "kindnet-ns57l" [2f3cdacc-f2bb-49d1-9424-5fd62081ecaf] Running
	I0110 09:13:44.012898  235056 system_pods.go:89] "kube-apiserver-embed-certs-070240" [e0a3faa8-1472-4f68-a1fd-84f22a19c4d8] Running
	I0110 09:13:44.012917  235056 system_pods.go:89] "kube-controller-manager-embed-certs-070240" [1c9c96b6-d52e-48de-ba80-e464a7153b22] Running
	I0110 09:13:44.012936  235056 system_pods.go:89] "kube-proxy-txqld" [44053c1b-39fd-47b0-b88d-b01dc9ec9935] Running
	I0110 09:13:44.012956  235056 system_pods.go:89] "kube-scheduler-embed-certs-070240" [f7cafcd1-5017-46d4-803b-2ab15558f532] Running
	I0110 09:13:44.012988  235056 system_pods.go:89] "storage-provisioner" [c7153e29-69ef-4001-9a0a-c6b18ed7c134] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 09:13:44.013017  235056 system_pods.go:126] duration metric: took 573.615542ms to wait for k8s-apps to be running ...
	I0110 09:13:44.013042  235056 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 09:13:44.013128  235056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:13:44.050285  235056 system_svc.go:56] duration metric: took 37.233788ms WaitForService to wait for kubelet
	I0110 09:13:44.050407  235056 kubeadm.go:587] duration metric: took 13.90568046s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 09:13:44.050456  235056 node_conditions.go:102] verifying NodePressure condition ...
	I0110 09:13:44.069413  235056 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 09:13:44.069485  235056 node_conditions.go:123] node cpu capacity is 2
	I0110 09:13:44.069514  235056 node_conditions.go:105] duration metric: took 19.032268ms to run NodePressure ...
	I0110 09:13:44.069543  235056 start.go:242] waiting for startup goroutines ...
	I0110 09:13:44.069575  235056 start.go:247] waiting for cluster config update ...
	I0110 09:13:44.069603  235056 start.go:256] writing updated cluster config ...
	I0110 09:13:44.069900  235056 ssh_runner.go:195] Run: rm -f paused
	I0110 09:13:44.073982  235056 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 09:13:44.096622  235056 pod_ready.go:83] waiting for pod "coredns-7d764666f9-6tr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:44.103780  235056 pod_ready.go:94] pod "coredns-7d764666f9-6tr7h" is "Ready"
	I0110 09:13:44.103856  235056 pod_ready.go:86] duration metric: took 7.164071ms for pod "coredns-7d764666f9-6tr7h" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:44.106863  235056 pod_ready.go:83] waiting for pod "etcd-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:44.112867  235056 pod_ready.go:94] pod "etcd-embed-certs-070240" is "Ready"
	I0110 09:13:44.112940  235056 pod_ready.go:86] duration metric: took 6.004422ms for pod "etcd-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:44.115833  235056 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:44.121113  235056 pod_ready.go:94] pod "kube-apiserver-embed-certs-070240" is "Ready"
	I0110 09:13:44.121182  235056 pod_ready.go:86] duration metric: took 5.28827ms for pod "kube-apiserver-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:44.124152  235056 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:44.478177  235056 pod_ready.go:94] pod "kube-controller-manager-embed-certs-070240" is "Ready"
	I0110 09:13:44.478204  235056 pod_ready.go:86] duration metric: took 353.989503ms for pod "kube-controller-manager-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:44.678525  235056 pod_ready.go:83] waiting for pod "kube-proxy-txqld" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:45.092574  235056 pod_ready.go:94] pod "kube-proxy-txqld" is "Ready"
	I0110 09:13:45.092609  235056 pod_ready.go:86] duration metric: took 414.058387ms for pod "kube-proxy-txqld" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:45.280131  235056 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:45.679759  235056 pod_ready.go:94] pod "kube-scheduler-embed-certs-070240" is "Ready"
	I0110 09:13:45.679840  235056 pod_ready.go:86] duration metric: took 399.603154ms for pod "kube-scheduler-embed-certs-070240" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:13:45.679870  235056 pod_ready.go:40] duration metric: took 1.605787632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 09:13:45.757621  235056 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 09:13:45.760895  235056 out.go:203] 
	W0110 09:13:45.763879  235056 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 09:13:45.766911  235056 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 09:13:45.770720  235056 out.go:179] * Done! kubectl is now configured to use "embed-certs-070240" cluster and "default" namespace by default
	I0110 09:13:52.839263  209870 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001081313s
	I0110 09:13:52.839294  209870 kubeadm.go:319] 
	I0110 09:13:52.839378  209870 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:13:52.839416  209870 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:13:52.839522  209870 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:13:52.839531  209870 kubeadm.go:319] 
	I0110 09:13:52.839635  209870 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:13:52.839666  209870 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:13:52.839697  209870 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:13:52.839701  209870 kubeadm.go:319] 
	I0110 09:13:52.844761  209870 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:13:52.845168  209870 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:13:52.845278  209870 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:13:52.845530  209870 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 09:13:52.845541  209870 kubeadm.go:319] 
	I0110 09:13:52.845606  209870 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 09:13:52.845665  209870 kubeadm.go:403] duration metric: took 8m6.505307114s to StartCluster
	I0110 09:13:52.845717  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0110 09:13:52.845786  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 09:13:52.876377  209870 cri.go:96] found id: ""
	I0110 09:13:52.876416  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.876425  209870 logs.go:284] No container was found matching "kube-apiserver"
	I0110 09:13:52.876432  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0110 09:13:52.876504  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 09:13:52.902015  209870 cri.go:96] found id: ""
	I0110 09:13:52.902038  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.902047  209870 logs.go:284] No container was found matching "etcd"
	I0110 09:13:52.902055  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0110 09:13:52.902130  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 09:13:52.930106  209870 cri.go:96] found id: ""
	I0110 09:13:52.930126  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.930135  209870 logs.go:284] No container was found matching "coredns"
	I0110 09:13:52.930141  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0110 09:13:52.930200  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 09:13:52.962753  209870 cri.go:96] found id: ""
	I0110 09:13:52.962779  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.962788  209870 logs.go:284] No container was found matching "kube-scheduler"
	I0110 09:13:52.962794  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0110 09:13:52.962852  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 09:13:52.988595  209870 cri.go:96] found id: ""
	I0110 09:13:52.988621  209870 logs.go:282] 0 containers: []
	W0110 09:13:52.988630  209870 logs.go:284] No container was found matching "kube-proxy"
	I0110 09:13:52.988637  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 09:13:52.988699  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 09:13:53.013852  209870 cri.go:96] found id: ""
	I0110 09:13:53.013877  209870 logs.go:282] 0 containers: []
	W0110 09:13:53.013886  209870 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 09:13:53.013893  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0110 09:13:53.013952  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 09:13:53.043720  209870 cri.go:96] found id: ""
	I0110 09:13:53.043743  209870 logs.go:282] 0 containers: []
	W0110 09:13:53.043752  209870 logs.go:284] No container was found matching "kindnet"
	I0110 09:13:53.043763  209870 logs.go:123] Gathering logs for kubelet ...
	I0110 09:13:53.043775  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 09:13:53.104744  209870 logs.go:123] Gathering logs for dmesg ...
	I0110 09:13:53.104780  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 09:13:53.118863  209870 logs.go:123] Gathering logs for describe nodes ...
	I0110 09:13:53.118893  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 09:13:53.212815  209870 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:13:53.185939    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.187077    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.203692    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.206842    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.207578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 09:13:53.185939    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.187077    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.203692    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.206842    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:53.207578    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 09:13:53.212852  209870 logs.go:123] Gathering logs for containerd ...
	I0110 09:13:53.212865  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0110 09:13:53.257937  209870 logs.go:123] Gathering logs for container status ...
	I0110 09:13:53.257973  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0110 09:13:53.288259  209870 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001081313s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 09:13:53.288311  209870 out.go:285] * 
	W0110 09:13:53.288366  209870 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001081313s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:13:53.288383  209870 out.go:285] * 
	W0110 09:13:53.288643  209870 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:13:53.293744  209870 out.go:203] 
	W0110 09:13:53.295826  209870 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001081313s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:13:53.295871  209870 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 09:13:53.295893  209870 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 09:13:53.298998  209870 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762674350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762686831Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762726790Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762745942Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762759997Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762771829Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762782242Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762793672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762806652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.762837069Z" level=info msg="Connect containerd service"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.763117871Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.763699232Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.782146927Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.782208787Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.782239630Z" level=info msg="Start subscribing containerd event"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.782290716Z" level=info msg="Start recovering state"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821359137Z" level=info msg="Start event monitor"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821550846Z" level=info msg="Start cni network conf syncer for default"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821617333Z" level=info msg="Start streaming server"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821684674Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821794764Z" level=info msg="runtime interface starting up..."
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821848829Z" level=info msg="starting plugins..."
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.821911977Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 09:05:44 force-systemd-flag-447307 systemd[1]: Started containerd.service - containerd container runtime.
	Jan 10 09:05:44 force-systemd-flag-447307 containerd[760]: time="2026-01-10T09:05:44.824569251Z" level=info msg="containerd successfully booted in 0.083945s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:13:54.680425    4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:54.681236    4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:54.682980    4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:54.683635    4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:13:54.685255    4914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan10 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015531] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.518244] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036376] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.856143] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.640312] kauditd_printk_skb: 39 callbacks suppressed
	[Jan10 08:20] hrtimer: interrupt took 13698190 ns
	
	
	==> kernel <==
	 09:13:54 up 56 min,  0 user,  load average: 2.08, 1.99, 1.91
	Linux force-systemd-flag-447307 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 09:13:51 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:13:52 force-systemd-flag-447307 kubelet[4713]: E0110 09:13:52.474338    4713 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:13:52 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:13:53 force-systemd-flag-447307 kubelet[4789]: E0110 09:13:53.252372    4789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:13:53 force-systemd-flag-447307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:13:54 force-systemd-flag-447307 kubelet[4823]: E0110 09:13:54.007587    4823 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:13:54 force-systemd-flag-447307 kubelet[4919]: E0110 09:13:54.753275    4919 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:13:54 force-systemd-flag-447307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
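
The kubelet journal at the end of the dump above shows the direct cause of the wait-control-plane timeout: kubelet v1.35 exits because the node is on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm preflight warning notes that cgroup v1 now has to be explicitly re-enabled via the kubelet configuration option 'FailCgroupV1'. A minimal shell sketch to confirm the host's cgroup mode (a standard Linux check, not taken from this report):

    # Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host.
    stat -fc %T /sys/fs/cgroup/
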
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-447307 -n force-systemd-flag-447307
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-447307 -n force-systemd-flag-447307: exit status 6 (376.559027ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 09:13:55.188880  238846 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-447307" does not appear in /home/jenkins/minikube-integration/22427-2439/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-447307" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-447307" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-447307
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-447307: (2.132879741s)
--- FAIL: TestForceSystemdFlag (503.98s)
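
The failure output above already lists the next diagnostic steps; gathered here as a shell sketch for a local retry. The start flags, the --extra-config suggestion, and the logs command are quoted from the messages above; the `minikube ssh` invocations are an assumption about how to reach the node's kubelet (the commands have to run inside the node, and the profile was deleted by the cleanup step above, so this only applies to a fresh run):

    # Retry the same start with the cgroup-driver hint suggested by minikube:
    out/minikube-linux-arm64 start -p force-systemd-flag-447307 --memory=3072 --force-systemd \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd

    # If it still fails, inspect the kubelet inside the node and collect logs for a GitHub issue:
    out/minikube-linux-arm64 -p force-systemd-flag-447307 ssh -- sudo systemctl status kubelet
    out/minikube-linux-arm64 -p force-systemd-flag-447307 ssh -- sudo journalctl -xeu kubelet --no-pager
    out/minikube-linux-arm64 -p force-systemd-flag-447307 logs --file=logs.txt
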

                                                
                                    
TestForceSystemdEnv (506.28s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-562333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0110 08:59:14.207250    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-562333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m22.276118369s)

                                                
                                                
-- stdout --
	* [force-systemd-env-562333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-562333" primary control-plane node in "force-systemd-env-562333" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:59:09.588577  190061 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:59:09.588729  190061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:59:09.588736  190061 out.go:374] Setting ErrFile to fd 2...
	I0110 08:59:09.588741  190061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:59:09.589000  190061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:59:09.589388  190061 out.go:368] Setting JSON to false
	I0110 08:59:09.590370  190061 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2503,"bootTime":1768033047,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 08:59:09.590439  190061 start.go:143] virtualization:  
	I0110 08:59:09.595010  190061 out.go:179] * [force-systemd-env-562333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 08:59:09.599215  190061 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:59:09.599293  190061 notify.go:221] Checking for updates...
	I0110 08:59:09.606455  190061 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:59:09.610022  190061 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 08:59:09.613132  190061 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 08:59:09.616188  190061 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 08:59:09.619335  190061 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I0110 08:59:09.622994  190061 config.go:182] Loaded profile config "test-preload-953196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:59:09.623154  190061 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:59:09.676148  190061 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:59:09.676329  190061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:59:09.790046  190061 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2026-01-10 08:59:09.779268133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:59:09.790143  190061 docker.go:319] overlay module found
	I0110 08:59:09.793337  190061 out.go:179] * Using the docker driver based on user configuration
	I0110 08:59:09.796205  190061 start.go:309] selected driver: docker
	I0110 08:59:09.796226  190061 start.go:928] validating driver "docker" against <nil>
	I0110 08:59:09.796248  190061 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:59:09.796932  190061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:59:09.918618  190061 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2026-01-10 08:59:09.907318691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:59:09.918788  190061 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:59:09.919010  190061 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:59:09.922064  190061 out.go:179] * Using Docker driver with root privileges
	I0110 08:59:09.924996  190061 cni.go:84] Creating CNI manager for ""
	I0110 08:59:09.925074  190061 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 08:59:09.925090  190061 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:59:09.925186  190061 start.go:353] cluster config:
	{Name:force-systemd-env-562333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-562333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:09.928504  190061 out.go:179] * Starting "force-systemd-env-562333" primary control-plane node in "force-systemd-env-562333" cluster
	I0110 08:59:09.931377  190061 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0110 08:59:09.934389  190061 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:59:09.937319  190061 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 08:59:09.937370  190061 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I0110 08:59:09.937402  190061 cache.go:65] Caching tarball of preloaded images
	I0110 08:59:09.937485  190061 preload.go:251] Found /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 08:59:09.937500  190061 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I0110 08:59:09.937595  190061 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/config.json ...
	I0110 08:59:09.937622  190061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/config.json: {Name:mk19aba03bdb8b404b18c2e1b2bca02316b3ad08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:09.937782  190061 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:59:09.960474  190061 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:59:09.960502  190061 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:59:09.960517  190061 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:59:09.960552  190061 start.go:360] acquireMachinesLock for force-systemd-env-562333: {Name:mk5776734393fa0edf34b4ff9d1c8424aa29e6b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:59:09.960662  190061 start.go:364] duration metric: took 89.093µs to acquireMachinesLock for "force-systemd-env-562333"
	I0110 08:59:09.960691  190061 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-562333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-562333 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0110 08:59:09.960763  190061 start.go:125] createHost starting for "" (driver="docker")
	I0110 08:59:09.964829  190061 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 08:59:09.965061  190061 start.go:159] libmachine.API.Create for "force-systemd-env-562333" (driver="docker")
	I0110 08:59:09.965097  190061 client.go:173] LocalClient.Create starting
	I0110 08:59:09.965168  190061 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem
	I0110 08:59:09.965206  190061 main.go:144] libmachine: Decoding PEM data...
	I0110 08:59:09.965225  190061 main.go:144] libmachine: Parsing certificate...
	I0110 08:59:09.965273  190061 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem
	I0110 08:59:09.965295  190061 main.go:144] libmachine: Decoding PEM data...
	I0110 08:59:09.965315  190061 main.go:144] libmachine: Parsing certificate...
	I0110 08:59:09.965710  190061 cli_runner.go:164] Run: docker network inspect force-systemd-env-562333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 08:59:09.985406  190061 cli_runner.go:211] docker network inspect force-systemd-env-562333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 08:59:09.985486  190061 network_create.go:284] running [docker network inspect force-systemd-env-562333] to gather additional debugging logs...
	I0110 08:59:09.985508  190061 cli_runner.go:164] Run: docker network inspect force-systemd-env-562333
	W0110 08:59:10.010261  190061 cli_runner.go:211] docker network inspect force-systemd-env-562333 returned with exit code 1
	I0110 08:59:10.010291  190061 network_create.go:287] error running [docker network inspect force-systemd-env-562333]: docker network inspect force-systemd-env-562333: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-562333 not found
	I0110 08:59:10.010304  190061 network_create.go:289] output of [docker network inspect force-systemd-env-562333]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-562333 not found
	
	** /stderr **
	I0110 08:59:10.010391  190061 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:10.031831  190061 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e01acd8ff726 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:8b:1d:1f:6a:28} reservation:<nil>}
	I0110 08:59:10.032235  190061 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab4f89e52867 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:d7:2a:6d:f4:96} reservation:<nil>}
	I0110 08:59:10.032628  190061 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8b226bd60dd7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:a9:74:b4} reservation:<nil>}
	I0110 08:59:10.033111  190061 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cfc10}
	I0110 08:59:10.033139  190061 network_create.go:124] attempt to create docker network force-systemd-env-562333 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 08:59:10.033219  190061 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-562333 force-systemd-env-562333
	I0110 08:59:10.097531  190061 network_create.go:108] docker network force-systemd-env-562333 192.168.76.0/24 created
	I0110 08:59:10.097569  190061 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-562333" container
	I0110 08:59:10.097642  190061 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 08:59:10.118961  190061 cli_runner.go:164] Run: docker volume create force-systemd-env-562333 --label name.minikube.sigs.k8s.io=force-systemd-env-562333 --label created_by.minikube.sigs.k8s.io=true
	I0110 08:59:10.149048  190061 oci.go:103] Successfully created a docker volume force-systemd-env-562333
	I0110 08:59:10.149132  190061 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-562333-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-562333 --entrypoint /usr/bin/test -v force-systemd-env-562333:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 08:59:10.852162  190061 oci.go:107] Successfully prepared a docker volume force-systemd-env-562333
	I0110 08:59:10.852227  190061 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 08:59:10.852237  190061 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 08:59:10.852313  190061 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-562333:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 08:59:15.445648  190061 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-562333:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.593289735s)
	I0110 08:59:15.445679  190061 kic.go:203] duration metric: took 4.59343793s to extract preloaded images to volume ...
	W0110 08:59:15.445816  190061 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 08:59:15.445932  190061 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 08:59:15.601575  190061 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-562333 --name force-systemd-env-562333 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-562333 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-562333 --network force-systemd-env-562333 --ip 192.168.76.2 --volume force-systemd-env-562333:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 08:59:16.072543  190061 cli_runner.go:164] Run: docker container inspect force-systemd-env-562333 --format={{.State.Running}}
	I0110 08:59:16.097837  190061 cli_runner.go:164] Run: docker container inspect force-systemd-env-562333 --format={{.State.Status}}
	I0110 08:59:16.118837  190061 cli_runner.go:164] Run: docker exec force-systemd-env-562333 stat /var/lib/dpkg/alternatives/iptables
	I0110 08:59:16.182725  190061 oci.go:144] the created container "force-systemd-env-562333" has a running status.
	I0110 08:59:16.182753  190061 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-env-562333/id_rsa...
	I0110 08:59:16.425771  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-env-562333/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 08:59:16.425860  190061 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-env-562333/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 08:59:16.450085  190061 cli_runner.go:164] Run: docker container inspect force-systemd-env-562333 --format={{.State.Status}}
	I0110 08:59:16.477114  190061 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 08:59:16.477134  190061 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-562333 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 08:59:16.547121  190061 cli_runner.go:164] Run: docker container inspect force-systemd-env-562333 --format={{.State.Status}}
	I0110 08:59:16.579598  190061 machine.go:94] provisionDockerMachine start ...
	I0110 08:59:16.579698  190061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-562333
	I0110 08:59:16.607621  190061 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:16.608054  190061 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33014 <nil> <nil>}
	I0110 08:59:16.608075  190061 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:59:16.608835  190061 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 08:59:19.767037  190061 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-562333
	
	I0110 08:59:19.767060  190061 ubuntu.go:182] provisioning hostname "force-systemd-env-562333"
	I0110 08:59:19.767132  190061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-562333
	I0110 08:59:19.791543  190061 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:19.791913  190061 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33014 <nil> <nil>}
	I0110 08:59:19.791932  190061 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-562333 && echo "force-systemd-env-562333" | sudo tee /etc/hostname
	I0110 08:59:19.957796  190061 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-562333
	
	I0110 08:59:19.957921  190061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-562333
	I0110 08:59:19.976290  190061 main.go:144] libmachine: Using SSH client type: native
	I0110 08:59:19.976612  190061 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33014 <nil> <nil>}
	I0110 08:59:19.976634  190061 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-562333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-562333/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-562333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:59:20.140328  190061 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:59:20.140363  190061 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2439/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2439/.minikube}
	I0110 08:59:20.140383  190061 ubuntu.go:190] setting up certificates
	I0110 08:59:20.140392  190061 provision.go:84] configureAuth start
	I0110 08:59:20.140481  190061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-562333
	I0110 08:59:20.163627  190061 provision.go:143] copyHostCerts
	I0110 08:59:20.163669  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
	I0110 08:59:20.163700  190061 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem, removing ...
	I0110 08:59:20.163707  190061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
	I0110 08:59:20.163800  190061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem (1078 bytes)
	I0110 08:59:20.163916  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
	I0110 08:59:20.163937  190061 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem, removing ...
	I0110 08:59:20.163943  190061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
	I0110 08:59:20.163970  190061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem (1123 bytes)
	I0110 08:59:20.164021  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
	I0110 08:59:20.164041  190061 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem, removing ...
	I0110 08:59:20.164046  190061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
	I0110 08:59:20.164071  190061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem (1675 bytes)
	I0110 08:59:20.164136  190061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-562333 san=[127.0.0.1 192.168.76.2 force-systemd-env-562333 localhost minikube]
	I0110 08:59:20.282981  190061 provision.go:177] copyRemoteCerts
	I0110 08:59:20.283055  190061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:59:20.283105  190061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-562333
	I0110 08:59:20.302570  190061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33014 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-env-562333/id_rsa Username:docker}
	I0110 08:59:20.408849  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 08:59:20.408952  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:59:20.426508  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 08:59:20.426565  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0110 08:59:20.444818  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 08:59:20.444889  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 08:59:20.463034  190061 provision.go:87] duration metric: took 322.608321ms to configureAuth
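	(Editor's note: the configureAuth step above issues a server certificate whose SANs are [127.0.0.1 192.168.76.2 force-systemd-env-562333 localhost minikube]. The following is a heavily simplified, self-signed Go sketch of producing such a certificate with the standard library; it is not minikube's implementation, which signs with its own CA, and the org/SAN values are simply copied from the log above.)

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key pair for the server certificate.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}

		// Template carrying the SANs seen in the provision.go log line.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-562333"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:     []string{"force-systemd-env-562333", "localhost", "minikube"},
		}

		// Self-signed for brevity; the real flow signs with the minikube CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}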
	I0110 08:59:20.463103  190061 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:59:20.463292  190061 config.go:182] Loaded profile config "force-systemd-env-562333": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:59:20.463308  190061 machine.go:97] duration metric: took 3.883692296s to provisionDockerMachine
	I0110 08:59:20.463316  190061 client.go:176] duration metric: took 10.49820855s to LocalClient.Create
	I0110 08:59:20.463344  190061 start.go:167] duration metric: took 10.498283144s to libmachine.API.Create "force-systemd-env-562333"
	I0110 08:59:20.463407  190061 start.go:293] postStartSetup for "force-systemd-env-562333" (driver="docker")
	I0110 08:59:20.463416  190061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:59:20.463479  190061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:59:20.463529  190061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-562333
	I0110 08:59:20.482131  190061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33014 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-env-562333/id_rsa Username:docker}
	I0110 08:59:20.587801  190061 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:59:20.593843  190061 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:59:20.593879  190061 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:59:20.593890  190061 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/addons for local assets ...
	I0110 08:59:20.593943  190061 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/files for local assets ...
	I0110 08:59:20.594026  190061 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> 42572.pem in /etc/ssl/certs
	I0110 08:59:20.594036  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> /etc/ssl/certs/42572.pem
	I0110 08:59:20.594131  190061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:59:20.602513  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /etc/ssl/certs/42572.pem (1708 bytes)
	I0110 08:59:20.622962  190061 start.go:296] duration metric: took 159.540893ms for postStartSetup
	I0110 08:59:20.623344  190061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-562333
	I0110 08:59:20.640665  190061 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/config.json ...
	I0110 08:59:20.641005  190061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:59:20.641062  190061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-562333
	I0110 08:59:20.657715  190061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33014 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-env-562333/id_rsa Username:docker}
	I0110 08:59:20.760258  190061 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:59:20.764719  190061 start.go:128] duration metric: took 10.803942685s to createHost
	I0110 08:59:20.764741  190061 start.go:83] releasing machines lock for "force-systemd-env-562333", held for 10.804066181s
	I0110 08:59:20.764821  190061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-562333
	I0110 08:59:20.782003  190061 ssh_runner.go:195] Run: cat /version.json
	I0110 08:59:20.782059  190061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-562333
	I0110 08:59:20.782316  190061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:59:20.782367  190061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-562333
	I0110 08:59:20.804893  190061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33014 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-env-562333/id_rsa Username:docker}
	I0110 08:59:20.807304  190061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33014 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-env-562333/id_rsa Username:docker}
	I0110 08:59:20.912941  190061 ssh_runner.go:195] Run: systemctl --version
	I0110 08:59:21.020464  190061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:59:21.026107  190061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:59:21.026193  190061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:59:21.055578  190061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 08:59:21.055602  190061 start.go:496] detecting cgroup driver to use...
	I0110 08:59:21.055621  190061 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 08:59:21.055722  190061 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0110 08:59:21.070837  190061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 08:59:21.083724  190061 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:59:21.083846  190061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:59:21.102580  190061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:59:21.122133  190061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:59:21.249448  190061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:59:21.382485  190061 docker.go:234] disabling docker service ...
	I0110 08:59:21.382588  190061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:59:21.404994  190061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:59:21.418928  190061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:59:21.533184  190061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:59:21.662142  190061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:59:21.676495  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:59:21.691873  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 08:59:21.701800  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 08:59:21.711195  190061 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 08:59:21.711343  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 08:59:21.720759  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:21.731187  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 08:59:21.740840  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 08:59:21.751229  190061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:59:21.760469  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 08:59:21.769541  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 08:59:21.779143  190061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 08:59:21.789084  190061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:59:21.797017  190061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:59:21.811940  190061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:21.929012  190061 ssh_runner.go:195] Run: sudo systemctl restart containerd
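	(Editor's note: the sequence above rewrites /etc/containerd/config.toml so the runc runtime uses the systemd cgroup driver ("SystemdCgroup = true") and then restarts containerd. A minimal standalone Go sketch, not minikube code, for spot-checking that the flag ended up enabled could look like this; the path and key name are taken from the sed commands in the log.)

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// checkSystemdCgroup scans a containerd config.toml and reports whether a
	// "SystemdCgroup = true" line is present. A line-oriented check, not a
	// full TOML parse, which is enough for a quick sanity test.
	func checkSystemdCgroup(path string) (bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return false, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "SystemdCgroup") && strings.HasSuffix(line, "true") {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		enabled, err := checkSystemdCgroup("/etc/containerd/config.toml")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read config:", err)
			os.Exit(1)
		}
		fmt.Println("SystemdCgroup enabled:", enabled)
	}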
	I0110 08:59:22.051124  190061 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I0110 08:59:22.051232  190061 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0110 08:59:22.057273  190061 start.go:574] Will wait 60s for crictl version
	I0110 08:59:22.057373  190061 ssh_runner.go:195] Run: which crictl
	I0110 08:59:22.061210  190061 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:59:22.089079  190061 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I0110 08:59:22.089182  190061 ssh_runner.go:195] Run: containerd --version
	I0110 08:59:22.113418  190061 ssh_runner.go:195] Run: containerd --version
	I0110 08:59:22.142851  190061 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I0110 08:59:22.145943  190061 cli_runner.go:164] Run: docker network inspect force-systemd-env-562333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:59:22.162645  190061 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 08:59:22.166561  190061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:22.176494  190061 kubeadm.go:884] updating cluster {Name:force-systemd-env-562333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-562333 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:59:22.176608  190061 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 08:59:22.176675  190061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:59:22.205863  190061 containerd.go:635] all images are preloaded for containerd runtime.
	I0110 08:59:22.205890  190061 containerd.go:542] Images already preloaded, skipping extraction
	I0110 08:59:22.205948  190061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:59:22.230259  190061 containerd.go:635] all images are preloaded for containerd runtime.
	I0110 08:59:22.230283  190061 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:59:22.230291  190061 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I0110 08:59:22.230384  190061 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-562333 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-562333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 08:59:22.230457  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I0110 08:59:22.256911  190061 cni.go:84] Creating CNI manager for ""
	I0110 08:59:22.256933  190061 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 08:59:22.256955  190061 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:59:22.256977  190061 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-562333 NodeName:force-systemd-env-562333 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:59:22.257099  190061 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-562333"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 08:59:22.257171  190061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:59:22.265206  190061 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:59:22.265343  190061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:59:22.272930  190061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0110 08:59:22.285669  190061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:59:22.298426  190061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I0110 08:59:22.312149  190061 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:59:22.315881  190061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:59:22.325672  190061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:59:22.435776  190061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:59:22.452913  190061 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333 for IP: 192.168.76.2
	I0110 08:59:22.452939  190061 certs.go:195] generating shared ca certs ...
	I0110 08:59:22.452956  190061 certs.go:227] acquiring lock for ca certs: {Name:mk2efb7c26990a28337b434f05b8d75a57c7c690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:22.453098  190061 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key
	I0110 08:59:22.453146  190061 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key
	I0110 08:59:22.453153  190061 certs.go:257] generating profile certs ...
	I0110 08:59:22.453221  190061 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/client.key
	I0110 08:59:22.453239  190061 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/client.crt with IP's: []
	I0110 08:59:22.752224  190061 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/client.crt ...
	I0110 08:59:22.752252  190061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/client.crt: {Name:mkd1d2dbda008588db59558d1ad9c2d5db56b5ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:22.752474  190061 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/client.key ...
	I0110 08:59:22.752488  190061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/client.key: {Name:mkb8ee2cd98416eb892e817314e0a34cb309d73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:22.752623  190061 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.key.4ddf7a93
	I0110 08:59:22.752646  190061 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.crt.4ddf7a93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 08:59:23.064349  190061 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.crt.4ddf7a93 ...
	I0110 08:59:23.064388  190061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.crt.4ddf7a93: {Name:mkdc4e2ad9bdae06e1c3ad08977bdf75f9cbe000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.064613  190061 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.key.4ddf7a93 ...
	I0110 08:59:23.064634  190061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.key.4ddf7a93: {Name:mk80f8190aa30a626c36fef7aea959332d995979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.064725  190061 certs.go:382] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.crt.4ddf7a93 -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.crt
	I0110 08:59:23.064805  190061 certs.go:386] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.key.4ddf7a93 -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.key
	I0110 08:59:23.064866  190061 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.key
	I0110 08:59:23.064884  190061 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.crt with IP's: []
	I0110 08:59:23.451504  190061 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.crt ...
	I0110 08:59:23.451535  190061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.crt: {Name:mkd485d2a0239f6f021d946ea54ede8b50d167b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.451723  190061 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.key ...
	I0110 08:59:23.451737  190061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.key: {Name:mk69efca240098c5e3a133c2beee753cdab82900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:59:23.451837  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 08:59:23.451859  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 08:59:23.451873  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 08:59:23.451890  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 08:59:23.451902  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 08:59:23.451921  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 08:59:23.451933  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 08:59:23.451948  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 08:59:23.451996  190061 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem (1338 bytes)
	W0110 08:59:23.452042  190061 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257_empty.pem, impossibly tiny 0 bytes
	I0110 08:59:23.452054  190061 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 08:59:23.452085  190061 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:59:23.452115  190061 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:59:23.452143  190061 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem (1675 bytes)
	I0110 08:59:23.452191  190061 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem (1708 bytes)
	I0110 08:59:23.452226  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem -> /usr/share/ca-certificates/4257.pem
	I0110 08:59:23.452242  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> /usr/share/ca-certificates/42572.pem
	I0110 08:59:23.452261  190061 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:23.452827  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:59:23.473987  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 08:59:23.492932  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:59:23.511540  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:59:23.530297  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 08:59:23.549166  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:59:23.568152  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:59:23.587460  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-env-562333/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:59:23.606387  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem --> /usr/share/ca-certificates/4257.pem (1338 bytes)
	I0110 08:59:23.625118  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /usr/share/ca-certificates/42572.pem (1708 bytes)
	I0110 08:59:23.643279  190061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:59:23.663056  190061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:59:23.677720  190061 ssh_runner.go:195] Run: openssl version
	I0110 08:59:23.684497  190061 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4257.pem
	I0110 08:59:23.692855  190061 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4257.pem /etc/ssl/certs/4257.pem
	I0110 08:59:23.701692  190061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4257.pem
	I0110 08:59:23.705962  190061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:27 /usr/share/ca-certificates/4257.pem
	I0110 08:59:23.706082  190061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4257.pem
	I0110 08:59:23.747636  190061 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:59:23.755375  190061 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4257.pem /etc/ssl/certs/51391683.0
	I0110 08:59:23.763462  190061 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42572.pem
	I0110 08:59:23.772041  190061 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42572.pem /etc/ssl/certs/42572.pem
	I0110 08:59:23.779845  190061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42572.pem
	I0110 08:59:23.783739  190061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:27 /usr/share/ca-certificates/42572.pem
	I0110 08:59:23.783862  190061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42572.pem
	I0110 08:59:23.825371  190061 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:23.833069  190061 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42572.pem /etc/ssl/certs/3ec20f2e.0
	I0110 08:59:23.840992  190061 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:23.848567  190061 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:59:23.856217  190061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:23.860319  190061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:23.860430  190061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:59:23.902004  190061 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:59:23.909600  190061 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 08:59:23.916945  190061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:59:23.920500  190061 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 08:59:23.920601  190061 kubeadm.go:401] StartCluster: {Name:force-systemd-env-562333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-562333 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:59:23.920686  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0110 08:59:23.920755  190061 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:59:23.951765  190061 cri.go:96] found id: ""
	I0110 08:59:23.951835  190061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:59:23.961780  190061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 08:59:23.970230  190061 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 08:59:23.970294  190061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 08:59:23.982478  190061 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 08:59:23.982507  190061 kubeadm.go:158] found existing configuration files:
	
	I0110 08:59:23.982565  190061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 08:59:23.991187  190061 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 08:59:23.991251  190061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 08:59:23.999101  190061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 08:59:24.007632  190061 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 08:59:24.007694  190061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 08:59:24.014999  190061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 08:59:24.024978  190061 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 08:59:24.025051  190061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 08:59:24.033578  190061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 08:59:24.042128  190061 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 08:59:24.042209  190061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 08:59:24.050273  190061 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 08:59:24.094854  190061 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 08:59:24.095093  190061 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 08:59:24.169436  190061 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 08:59:24.169514  190061 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 08:59:24.169554  190061 kubeadm.go:319] OS: Linux
	I0110 08:59:24.169604  190061 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 08:59:24.169656  190061 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 08:59:24.169718  190061 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 08:59:24.169771  190061 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 08:59:24.169825  190061 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 08:59:24.169880  190061 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 08:59:24.169929  190061 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 08:59:24.169979  190061 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 08:59:24.170029  190061 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 08:59:24.243416  190061 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 08:59:24.243597  190061 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 08:59:24.243715  190061 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 08:59:24.248989  190061 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 08:59:24.255844  190061 out.go:252]   - Generating certificates and keys ...
	I0110 08:59:24.255998  190061 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 08:59:24.256089  190061 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 08:59:24.530089  190061 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 08:59:24.796435  190061 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 08:59:24.966798  190061 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 08:59:25.300078  190061 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 08:59:25.511506  190061 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 08:59:25.511873  190061 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-562333 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 08:59:25.875125  190061 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 08:59:25.875511  190061 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-562333 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 08:59:26.136309  190061 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 08:59:26.213772  190061 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 08:59:26.440668  190061 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 08:59:26.440902  190061 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 08:59:26.816600  190061 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 08:59:27.359707  190061 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 08:59:27.443008  190061 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 08:59:28.576316  190061 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 08:59:29.200886  190061 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 08:59:29.202500  190061 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 08:59:29.210029  190061 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 08:59:29.213276  190061 out.go:252]   - Booting up control plane ...
	I0110 08:59:29.213392  190061 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 08:59:29.213471  190061 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 08:59:29.214646  190061 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 08:59:29.236855  190061 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 08:59:29.236965  190061 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 08:59:29.245110  190061 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 08:59:29.245456  190061 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 08:59:29.245520  190061 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 08:59:29.387441  190061 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 08:59:29.388072  190061 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:03:29.389713  190061 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001129952s
	I0110 09:03:29.389748  190061 kubeadm.go:319] 
	I0110 09:03:29.389853  190061 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:03:29.389963  190061 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:03:29.390237  190061 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:03:29.390248  190061 kubeadm.go:319] 
	I0110 09:03:29.390563  190061 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:03:29.390621  190061 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:03:29.390676  190061 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:03:29.390681  190061 kubeadm.go:319] 
	I0110 09:03:29.396716  190061 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:03:29.397287  190061 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:03:29.397423  190061 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:03:29.397771  190061 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:03:29.397784  190061 kubeadm.go:319] 
	I0110 09:03:29.397896  190061 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 09:03:29.398019  190061 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-562333 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-562333 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001129952s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
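	(Editor's note: the failure above is kubeadm's wait-control-plane phase timing out against the kubelet's local health endpoint. For context, a minimal Go sketch of the same kind of probe, repeated GETs against http://127.0.0.1:10248/healthz until a deadline, is shown below; it is an illustration, not the kubeadm implementation, and the 4-minute deadline simply mirrors the log.)

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// waitForKubeletHealthz polls the kubelet healthz endpoint until it
	// returns 200 OK or the deadline expires.
	func waitForKubeletHealthz(url string, deadline time.Duration) error {
		client := &http.Client{Timeout: 5 * time.Second}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("kubelet not healthy after %s", deadline)
	}

	func main() {
		if err := waitForKubeletHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("kubelet is healthy")
	}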
	
	I0110 09:03:29.398110  190061 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0110 09:03:29.820467  190061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:03:29.834745  190061 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:03:29.834818  190061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:03:29.843022  190061 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:03:29.843043  190061 kubeadm.go:158] found existing configuration files:
	
	I0110 09:03:29.843097  190061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:03:29.850742  190061 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:03:29.850808  190061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:03:29.858732  190061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:03:29.866902  190061 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:03:29.866980  190061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:03:29.875102  190061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:03:29.883168  190061 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:03:29.883283  190061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:03:29.890837  190061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:03:29.899332  190061 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:03:29.899425  190061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:03:29.906856  190061 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:03:29.944477  190061 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:03:29.944547  190061 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:03:30.045549  190061 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:03:30.045628  190061 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:03:30.045663  190061 kubeadm.go:319] OS: Linux
	I0110 09:03:30.045709  190061 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:03:30.045757  190061 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:03:30.045816  190061 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:03:30.045868  190061 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:03:30.045922  190061 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:03:30.045981  190061 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:03:30.046111  190061 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:03:30.046306  190061 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:03:30.046401  190061 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:03:30.170839  190061 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:03:30.170957  190061 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:03:30.171054  190061 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:03:30.179934  190061 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:03:30.184972  190061 out.go:252]   - Generating certificates and keys ...
	I0110 09:03:30.185123  190061 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:03:30.185205  190061 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:03:30.185312  190061 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 09:03:30.185375  190061 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 09:03:30.185445  190061 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 09:03:30.185502  190061 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 09:03:30.185566  190061 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 09:03:30.185631  190061 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 09:03:30.185705  190061 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 09:03:30.185779  190061 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 09:03:30.185827  190061 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 09:03:30.185885  190061 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:03:30.319830  190061 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:03:30.566279  190061 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:03:30.866007  190061 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:03:31.126970  190061 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:03:31.177842  190061 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:03:31.178776  190061 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:03:31.181829  190061 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:03:31.184998  190061 out.go:252]   - Booting up control plane ...
	I0110 09:03:31.185118  190061 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:03:31.185214  190061 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:03:31.185680  190061 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:03:31.209170  190061 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:03:31.209590  190061 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:03:31.220366  190061 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:03:31.220701  190061 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:03:31.220746  190061 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:03:31.354166  190061 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:03:31.354285  190061 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:07:31.354560  190061 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000294588s
	I0110 09:07:31.354595  190061 kubeadm.go:319] 
	I0110 09:07:31.354701  190061 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:07:31.354760  190061 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:07:31.355222  190061 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:07:31.355239  190061 kubeadm.go:319] 
	I0110 09:07:31.355581  190061 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:07:31.355643  190061 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:07:31.355711  190061 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:07:31.355718  190061 kubeadm.go:319] 
	I0110 09:07:31.360357  190061 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:07:31.360818  190061 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:07:31.360967  190061 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:07:31.361231  190061 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:07:31.361240  190061 kubeadm.go:319] 
	I0110 09:07:31.361319  190061 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 09:07:31.361380  190061 kubeadm.go:403] duration metric: took 8m7.440784672s to StartCluster
	I0110 09:07:31.361415  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0110 09:07:31.361474  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 09:07:31.386403  190061 cri.go:96] found id: ""
	I0110 09:07:31.386438  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.386447  190061 logs.go:284] No container was found matching "kube-apiserver"
	I0110 09:07:31.386455  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0110 09:07:31.386520  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 09:07:31.412121  190061 cri.go:96] found id: ""
	I0110 09:07:31.412146  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.412156  190061 logs.go:284] No container was found matching "etcd"
	I0110 09:07:31.412163  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0110 09:07:31.412223  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 09:07:31.441790  190061 cri.go:96] found id: ""
	I0110 09:07:31.441812  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.441821  190061 logs.go:284] No container was found matching "coredns"
	I0110 09:07:31.441827  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0110 09:07:31.441883  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 09:07:31.476282  190061 cri.go:96] found id: ""
	I0110 09:07:31.476304  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.476313  190061 logs.go:284] No container was found matching "kube-scheduler"
	I0110 09:07:31.476320  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0110 09:07:31.476381  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 09:07:31.506699  190061 cri.go:96] found id: ""
	I0110 09:07:31.506775  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.506798  190061 logs.go:284] No container was found matching "kube-proxy"
	I0110 09:07:31.506836  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 09:07:31.506914  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 09:07:31.531431  190061 cri.go:96] found id: ""
	I0110 09:07:31.531458  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.531467  190061 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 09:07:31.531474  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0110 09:07:31.531534  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 09:07:31.560327  190061 cri.go:96] found id: ""
	I0110 09:07:31.560351  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.560360  190061 logs.go:284] No container was found matching "kindnet"
	I0110 09:07:31.560371  190061 logs.go:123] Gathering logs for containerd ...
	I0110 09:07:31.560399  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0110 09:07:31.601110  190061 logs.go:123] Gathering logs for container status ...
	I0110 09:07:31.601144  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 09:07:31.630446  190061 logs.go:123] Gathering logs for kubelet ...
	I0110 09:07:31.630473  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 09:07:31.690726  190061 logs.go:123] Gathering logs for dmesg ...
	I0110 09:07:31.690762  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 09:07:31.706529  190061 logs.go:123] Gathering logs for describe nodes ...
	I0110 09:07:31.706558  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 09:07:31.776623  190061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:31.768093    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.768762    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.770420    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.771019    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.772729    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 09:07:31.768093    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.768762    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.770420    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.771019    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.772729    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0110 09:07:31.776659  190061 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000294588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 09:07:31.776731  190061 out.go:285] * 
	* 
	W0110 09:07:31.776891  190061 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000294588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000294588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:31.776912  190061 out.go:285] * 
	* 
	W0110 09:07:31.777160  190061 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:07:31.782608  190061 out.go:203] 
	W0110 09:07:31.785602  190061 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000294588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000294588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:31.785752  190061 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 09:07:31.785824  190061 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 09:07:31.788983  190061 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-562333 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-562333 ssh "cat /etc/containerd/config.toml"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2026-01-10 09:07:32.218184196 +0000 UTC m=+2833.400571321
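The failure above ends exactly where the log's own advice points: the kubelet never became healthy, and minikube suggests checking the kubelet journal and retrying with the kubelet cgroup driver forced to systemd. A minimal sketch of that follow-up, assuming the same profile name (force-systemd-env-562333), the same binary path (out/minikube-linux-arm64), and the flags the log itself recommends; this is an illustrative retry, not part of the recorded test run:

	# inspect why the kubelet did not come up inside the node container
	out/minikube-linux-arm64 -p force-systemd-env-562333 ssh "sudo systemctl status kubelet"
	out/minikube-linux-arm64 -p force-systemd-env-562333 ssh "sudo journalctl -xeu kubelet"

	# retry the start with the cgroup driver pinned to systemd, as the error message suggests
	out/minikube-linux-arm64 start -p force-systemd-env-562333 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd

The ssh invocation mirrors the one the test harness runs below (cat /etc/containerd/config.toml); only the commands executed inside the node differ.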
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-562333
helpers_test.go:244: (dbg) docker inspect force-systemd-env-562333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4591a6d938a3d301bc1a38a050716665db5661b289d7822719ffe19cc1703fbd",
	        "Created": "2026-01-10T08:59:15.652333839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 190624,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:59:15.726496487Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/4591a6d938a3d301bc1a38a050716665db5661b289d7822719ffe19cc1703fbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4591a6d938a3d301bc1a38a050716665db5661b289d7822719ffe19cc1703fbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/4591a6d938a3d301bc1a38a050716665db5661b289d7822719ffe19cc1703fbd/hosts",
	        "LogPath": "/var/lib/docker/containers/4591a6d938a3d301bc1a38a050716665db5661b289d7822719ffe19cc1703fbd/4591a6d938a3d301bc1a38a050716665db5661b289d7822719ffe19cc1703fbd-json.log",
	        "Name": "/force-systemd-env-562333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-562333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-562333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4591a6d938a3d301bc1a38a050716665db5661b289d7822719ffe19cc1703fbd",
	                "LowerDir": "/var/lib/docker/overlay2/542851d60342e13b8c15725b0447f6149f1b5282e91c8412660ffb95cdee95ce-init/diff:/var/lib/docker/overlay2/54d275d5bf894b41181c968ee2ec1be6f053e8252dc2214525d0175b72739adc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/542851d60342e13b8c15725b0447f6149f1b5282e91c8412660ffb95cdee95ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/542851d60342e13b8c15725b0447f6149f1b5282e91c8412660ffb95cdee95ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/542851d60342e13b8c15725b0447f6149f1b5282e91c8412660ffb95cdee95ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-562333",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-562333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-562333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-562333",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-562333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "20de95747f41fe07d8fa4730576eceb0f5647dba00116f3fe116733a856591b6",
	            "SandboxKey": "/var/run/docker/netns/20de95747f41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-562333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:4f:02:2c:c9:b4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e0acd71924818cab49fa3b19477c409ec75047bdba6e3403f7106d64d7fdcbc3",
	                    "EndpointID": "cdab81a1105239678da488a1543747235f71967f60e23429abc64d01c31abbbc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-562333",
	                        "4591a6d938a3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
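	(Editor's note, illustrative only and not part of the captured test output: the JSON above is what "docker container inspect" returned for the force-systemd-env-562333 container during post-mortem collection. Assuming the same container name, individual fields referenced above can be pulled with standard Docker Go templates, for example:

	  docker container inspect -f '{{.HostConfig.CgroupnsMode}}' force-systemd-env-562333        # "host" in the output above
	  docker container inspect -f '{{json .HostConfig.SecurityOpt}}' force-systemd-env-562333    # seccomp/apparmor/label disabled
	  docker container inspect -f '{{json .NetworkSettings.Ports}}' force-systemd-env-562333     # host ports 33014-33018

	The field paths and container name are taken directly from the inspect output shown above; the commands are ordinary Docker CLI usage, not something the test itself ran.)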
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-562333 -n force-systemd-env-562333
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-562333 -n force-systemd-env-562333: exit status 6 (314.832373ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 09:07:32.542549  213438 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-562333" does not appear in /home/jenkins/minikube-integration/22427-2439/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
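	(Editor's note, illustrative only: the exit status 6 here comes from the kubeconfig endpoint error in stderr above, i.e. the force-systemd-env-562333 profile has no entry in /home/jenkins/minikube-integration/22427-2439/kubeconfig. The stdout warning points at the usual remedy; assuming the same profile name, the invocation would be:

	  out/minikube-linux-arm64 -p force-systemd-env-562333 update-context

	update-context only rewrites the kubectl context for the profile; it does not by itself address why the start failed.)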
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-562333 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-811171 sudo cat /var/lib/kubelet/config.yaml                                                                            │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo systemctl status docker --all --full --no-pager                                                             │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo systemctl cat docker --no-pager                                                                             │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo cat /etc/docker/daemon.json                                                                                 │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo docker system info                                                                                          │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo systemctl status cri-docker --all --full --no-pager                                                         │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo systemctl cat cri-docker --no-pager                                                                         │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                    │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo cat /usr/lib/systemd/system/cri-docker.service                                                              │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo cri-dockerd --version                                                                                       │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo systemctl status containerd --all --full --no-pager                                                         │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo systemctl cat containerd --no-pager                                                                         │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo cat /lib/systemd/system/containerd.service                                                                  │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo cat /etc/containerd/config.toml                                                                             │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo containerd config dump                                                                                      │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo systemctl status crio --all --full --no-pager                                                               │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo systemctl cat crio --no-pager                                                                               │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                     │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ ssh     │ -p cilium-811171 sudo crio config                                                                                                 │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │                     │
	│ delete  │ -p cilium-811171                                                                                                                  │ cilium-811171             │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │ 10 Jan 26 09:01 UTC │
	│ start   │ -p cert-expiration-223749 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                      │ cert-expiration-223749    │ jenkins │ v1.37.0 │ 10 Jan 26 09:01 UTC │ 10 Jan 26 09:02 UTC │
	│ start   │ -p cert-expiration-223749 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                   │ cert-expiration-223749    │ jenkins │ v1.37.0 │ 10 Jan 26 09:05 UTC │ 10 Jan 26 09:05 UTC │
	│ delete  │ -p cert-expiration-223749                                                                                                         │ cert-expiration-223749    │ jenkins │ v1.37.0 │ 10 Jan 26 09:05 UTC │ 10 Jan 26 09:05 UTC │
	│ start   │ -p force-systemd-flag-447307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-flag-447307 │ jenkins │ v1.37.0 │ 10 Jan 26 09:05 UTC │                     │
	│ ssh     │ force-systemd-env-562333 ssh cat /etc/containerd/config.toml                                                                      │ force-systemd-env-562333  │ jenkins │ v1.37.0 │ 10 Jan 26 09:07 UTC │ 10 Jan 26 09:07 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:05:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 09:05:33.409464  209870 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:05:33.409589  209870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:05:33.409600  209870 out.go:374] Setting ErrFile to fd 2...
	I0110 09:05:33.409606  209870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:05:33.409935  209870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 09:05:33.410391  209870 out.go:368] Setting JSON to false
	I0110 09:05:33.411232  209870 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2887,"bootTime":1768033047,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 09:05:33.411330  209870 start.go:143] virtualization:  
	I0110 09:05:33.415136  209870 out.go:179] * [force-systemd-flag-447307] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:05:33.419458  209870 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:05:33.419525  209870 notify.go:221] Checking for updates...
	I0110 09:05:33.425912  209870 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:05:33.429209  209870 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 09:05:33.432347  209870 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 09:05:33.435460  209870 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:05:33.438570  209870 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:05:33.442372  209870 config.go:182] Loaded profile config "force-systemd-env-562333": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 09:05:33.442573  209870 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:05:33.476555  209870 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:05:33.476689  209870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:05:33.532767  209870 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:05:33.522514518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:05:33.532884  209870 docker.go:319] overlay module found
	I0110 09:05:33.536216  209870 out.go:179] * Using the docker driver based on user configuration
	I0110 09:05:33.539266  209870 start.go:309] selected driver: docker
	I0110 09:05:33.539290  209870 start.go:928] validating driver "docker" against <nil>
	I0110 09:05:33.539304  209870 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:05:33.540257  209870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:05:33.606717  209870 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:05:33.597485332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:05:33.606880  209870 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:05:33.607150  209870 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 09:05:33.610109  209870 out.go:179] * Using Docker driver with root privileges
	I0110 09:05:33.613052  209870 cni.go:84] Creating CNI manager for ""
	I0110 09:05:33.613123  209870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 09:05:33.613137  209870 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 09:05:33.613210  209870 start.go:353] cluster config:
	{Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

                                                
                                                
	I0110 09:05:33.616369  209870 out.go:179] * Starting "force-systemd-flag-447307" primary control-plane node in "force-systemd-flag-447307" cluster
	I0110 09:05:33.619253  209870 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0110 09:05:33.622283  209870 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:05:33.625240  209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:05:33.625284  209870 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I0110 09:05:33.625294  209870 cache.go:65] Caching tarball of preloaded images
	I0110 09:05:33.625329  209870 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:05:33.625389  209870 preload.go:251] Found /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 09:05:33.625401  209870 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I0110 09:05:33.625502  209870 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json ...
	I0110 09:05:33.625518  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json: {Name:mkf2d31f6f9a10b94727bf46c1c457843d8705ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:33.646574  209870 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 09:05:33.646596  209870 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 09:05:33.646616  209870 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:05:33.646655  209870 start.go:360] acquireMachinesLock for force-systemd-flag-447307: {Name:mkd48671d04edb3bc812df6ed361a4acb7311dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:05:33.646759  209870 start.go:364] duration metric: took 84.121µs to acquireMachinesLock for "force-systemd-flag-447307"
	I0110 09:05:33.646788  209870 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0110 09:05:33.646856  209870 start.go:125] createHost starting for "" (driver="docker")
	I0110 09:05:33.650271  209870 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 09:05:33.650508  209870 start.go:159] libmachine.API.Create for "force-systemd-flag-447307" (driver="docker")
	I0110 09:05:33.650544  209870 client.go:173] LocalClient.Create starting
	I0110 09:05:33.650632  209870 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem
	I0110 09:05:33.650669  209870 main.go:144] libmachine: Decoding PEM data...
	I0110 09:05:33.650699  209870 main.go:144] libmachine: Parsing certificate...
	I0110 09:05:33.650748  209870 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem
	I0110 09:05:33.650798  209870 main.go:144] libmachine: Decoding PEM data...
	I0110 09:05:33.650814  209870 main.go:144] libmachine: Parsing certificate...
	I0110 09:05:33.651204  209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 09:05:33.667215  209870 cli_runner.go:211] docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 09:05:33.667320  209870 network_create.go:284] running [docker network inspect force-systemd-flag-447307] to gather additional debugging logs...
	I0110 09:05:33.667372  209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307
	W0110 09:05:33.687489  209870 cli_runner.go:211] docker network inspect force-systemd-flag-447307 returned with exit code 1
	I0110 09:05:33.687524  209870 network_create.go:287] error running [docker network inspect force-systemd-flag-447307]: docker network inspect force-systemd-flag-447307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-447307 not found
	I0110 09:05:33.687543  209870 network_create.go:289] output of [docker network inspect force-systemd-flag-447307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-447307 not found
	
	** /stderr **
	I0110 09:05:33.687651  209870 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:05:33.707102  209870 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e01acd8ff726 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:8b:1d:1f:6a:28} reservation:<nil>}
	I0110 09:05:33.707525  209870 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab4f89e52867 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:d7:2a:6d:f4:96} reservation:<nil>}
	I0110 09:05:33.707892  209870 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8b226bd60dd7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:a9:74:b4} reservation:<nil>}
	I0110 09:05:33.708300  209870 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e0acd7192481 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:16:f7:84:76:30} reservation:<nil>}
	I0110 09:05:33.708837  209870 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001930810}
	I0110 09:05:33.708905  209870 network_create.go:124] attempt to create docker network force-systemd-flag-447307 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 09:05:33.708992  209870 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-447307 force-systemd-flag-447307
	I0110 09:05:33.788655  209870 network_create.go:108] docker network force-systemd-flag-447307 192.168.85.0/24 created
	I0110 09:05:33.788695  209870 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-447307" container
	I0110 09:05:33.788778  209870 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 09:05:33.805636  209870 cli_runner.go:164] Run: docker volume create force-systemd-flag-447307 --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --label created_by.minikube.sigs.k8s.io=true
	I0110 09:05:33.825622  209870 oci.go:103] Successfully created a docker volume force-systemd-flag-447307
	I0110 09:05:33.825717  209870 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-447307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --entrypoint /usr/bin/test -v force-systemd-flag-447307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 09:05:34.392818  209870 oci.go:107] Successfully prepared a docker volume force-systemd-flag-447307
	I0110 09:05:34.392892  209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:05:34.392910  209870 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 09:05:34.392989  209870 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-447307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 09:05:38.304331  209870 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-447307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.911286648s)
	I0110 09:05:38.304367  209870 kic.go:203] duration metric: took 3.911453699s to extract preloaded images to volume ...
	W0110 09:05:38.304501  209870 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 09:05:38.304616  209870 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 09:05:38.370965  209870 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-447307 --name force-systemd-flag-447307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-447307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-447307 --network force-systemd-flag-447307 --ip 192.168.85.2 --volume force-systemd-flag-447307:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 09:05:38.714423  209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Running}}
	I0110 09:05:38.748378  209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
	I0110 09:05:38.768410  209870 cli_runner.go:164] Run: docker exec force-systemd-flag-447307 stat /var/lib/dpkg/alternatives/iptables
	I0110 09:05:38.820953  209870 oci.go:144] the created container "force-systemd-flag-447307" has a running status.
	I0110 09:05:38.820980  209870 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa...
	I0110 09:05:39.091967  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 09:05:39.092015  209870 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 09:05:39.120080  209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
	I0110 09:05:39.153524  209870 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 09:05:39.153549  209870 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-447307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 09:05:39.219499  209870 cli_runner.go:164] Run: docker container inspect force-systemd-flag-447307 --format={{.State.Status}}
	I0110 09:05:39.244109  209870 machine.go:94] provisionDockerMachine start ...
	I0110 09:05:39.244193  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:39.268479  209870 main.go:144] libmachine: Using SSH client type: native
	I0110 09:05:39.268982  209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33044 <nil> <nil>}
	I0110 09:05:39.268997  209870 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 09:05:39.269646  209870 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 09:05:42.418786  209870 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-447307
	
	I0110 09:05:42.418815  209870 ubuntu.go:182] provisioning hostname "force-systemd-flag-447307"
	I0110 09:05:42.418891  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:42.436389  209870 main.go:144] libmachine: Using SSH client type: native
	I0110 09:05:42.436711  209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33044 <nil> <nil>}
	I0110 09:05:42.436734  209870 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-447307 && echo "force-systemd-flag-447307" | sudo tee /etc/hostname
	I0110 09:05:42.592363  209870 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-447307
	
	I0110 09:05:42.592444  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:42.609168  209870 main.go:144] libmachine: Using SSH client type: native
	I0110 09:05:42.609485  209870 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33044 <nil> <nil>}
	I0110 09:05:42.609511  209870 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-447307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-447307/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-447307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 09:05:42.763850  209870 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 09:05:42.763885  209870 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-2439/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-2439/.minikube}
	I0110 09:05:42.763907  209870 ubuntu.go:190] setting up certificates
	I0110 09:05:42.763917  209870 provision.go:84] configureAuth start
	I0110 09:05:42.763975  209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
	I0110 09:05:42.780237  209870 provision.go:143] copyHostCerts
	I0110 09:05:42.780278  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
	I0110 09:05:42.780310  209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem, removing ...
	I0110 09:05:42.780322  209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem
	I0110 09:05:42.780397  209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/ca.pem (1078 bytes)
	I0110 09:05:42.780483  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
	I0110 09:05:42.780504  209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem, removing ...
	I0110 09:05:42.780509  209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem
	I0110 09:05:42.780582  209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/cert.pem (1123 bytes)
	I0110 09:05:42.780638  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
	I0110 09:05:42.780660  209870 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem, removing ...
	I0110 09:05:42.780668  209870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem
	I0110 09:05:42.780694  209870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-2439/.minikube/key.pem (1675 bytes)
	I0110 09:05:42.780745  209870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-447307 san=[127.0.0.1 192.168.85.2 force-systemd-flag-447307 localhost minikube]
	I0110 09:05:43.091195  209870 provision.go:177] copyRemoteCerts
	I0110 09:05:43.091276  209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 09:05:43.091317  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.112972  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.219219  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 09:05:43.219278  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 09:05:43.236969  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 09:05:43.237036  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 09:05:43.254736  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 09:05:43.254810  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 09:05:43.272505  209870 provision.go:87] duration metric: took 508.564973ms to configureAuth
	I0110 09:05:43.272534  209870 ubuntu.go:206] setting minikube options for container-runtime
	I0110 09:05:43.272716  209870 config.go:182] Loaded profile config "force-systemd-flag-447307": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 09:05:43.272731  209870 machine.go:97] duration metric: took 4.028601641s to provisionDockerMachine
	I0110 09:05:43.272739  209870 client.go:176] duration metric: took 9.622186198s to LocalClient.Create
	I0110 09:05:43.272758  209870 start.go:167] duration metric: took 9.622250757s to libmachine.API.Create "force-systemd-flag-447307"
	I0110 09:05:43.272767  209870 start.go:293] postStartSetup for "force-systemd-flag-447307" (driver="docker")
	I0110 09:05:43.272776  209870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 09:05:43.272844  209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 09:05:43.272890  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.291040  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.399676  209870 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 09:05:43.403118  209870 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 09:05:43.403149  209870 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 09:05:43.403161  209870 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/addons for local assets ...
	I0110 09:05:43.403215  209870 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-2439/.minikube/files for local assets ...
	I0110 09:05:43.403296  209870 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> 42572.pem in /etc/ssl/certs
	I0110 09:05:43.403307  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> /etc/ssl/certs/42572.pem
	I0110 09:05:43.403441  209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 09:05:43.411262  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /etc/ssl/certs/42572.pem (1708 bytes)
	I0110 09:05:43.428973  209870 start.go:296] duration metric: took 156.191974ms for postStartSetup
	I0110 09:05:43.429327  209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
	I0110 09:05:43.449128  209870 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/config.json ...
	I0110 09:05:43.449426  209870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:05:43.449470  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.469062  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.568491  209870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 09:05:43.573119  209870 start.go:128] duration metric: took 9.926249482s to createHost
	I0110 09:05:43.573146  209870 start.go:83] releasing machines lock for "force-systemd-flag-447307", held for 9.926372964s
	I0110 09:05:43.573217  209870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-447307
	I0110 09:05:43.590249  209870 ssh_runner.go:195] Run: cat /version.json
	I0110 09:05:43.590305  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.590572  209870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 09:05:43.590643  209870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-447307
	I0110 09:05:43.615492  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.617966  209870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33044 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/force-systemd-flag-447307/id_rsa Username:docker}
	I0110 09:05:43.715397  209870 ssh_runner.go:195] Run: systemctl --version
	I0110 09:05:43.819535  209870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 09:05:43.823893  209870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 09:05:43.824019  209870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 09:05:43.851657  209870 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 09:05:43.851693  209870 start.go:496] detecting cgroup driver to use...
	I0110 09:05:43.851707  209870 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 09:05:43.851778  209870 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0110 09:05:43.867281  209870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 09:05:43.880163  209870 docker.go:218] disabling cri-docker service (if available) ...
	I0110 09:05:43.880224  209870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 09:05:43.897601  209870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 09:05:43.916022  209870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 09:05:44.034195  209870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 09:05:44.157732  209870 docker.go:234] disabling docker service ...
	I0110 09:05:44.157806  209870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 09:05:44.182671  209870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 09:05:44.199192  209870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 09:05:44.328856  209870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 09:05:44.450963  209870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 09:05:44.463783  209870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 09:05:44.479468  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 09:05:44.488749  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 09:05:44.497642  209870 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 09:05:44.497707  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 09:05:44.506787  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 09:05:44.516077  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 09:05:44.524994  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 09:05:44.533763  209870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 09:05:44.542113  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 09:05:44.551294  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 09:05:44.560593  209870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 09:05:44.569667  209870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 09:05:44.577424  209870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 09:05:44.585163  209870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:05:44.695011  209870 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0110 09:05:44.824179  209870 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I0110 09:05:44.824246  209870 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0110 09:05:44.828299  209870 start.go:574] Will wait 60s for crictl version
	I0110 09:05:44.828398  209870 ssh_runner.go:195] Run: which crictl
	I0110 09:05:44.831917  209870 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 09:05:44.856160  209870 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I0110 09:05:44.856261  209870 ssh_runner.go:195] Run: containerd --version
	I0110 09:05:44.877321  209870 ssh_runner.go:195] Run: containerd --version
	I0110 09:05:44.901544  209870 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I0110 09:05:44.904468  209870 cli_runner.go:164] Run: docker network inspect force-systemd-flag-447307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
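The --format argument above packs the whole network summary into one string. A simpler, hedged example of the same Go-template mechanism, if only the subnet and gateway are wanted:

    docker network inspect force-systemd-flag-447307 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
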
	I0110 09:05:44.921071  209870 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 09:05:44.924958  209870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
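The one-liner above is an idempotent /etc/hosts update: drop any existing line ending in the host name, then append the desired mapping. A hedged stand-alone sketch of the same pattern (bash; the variable names are illustrative only):

    name=host.minikube.internal ip=192.168.85.1
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
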
	I0110 09:05:44.935964  209870 kubeadm.go:884] updating cluster {Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 09:05:44.936082  209870 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0110 09:05:44.936148  209870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:05:44.963934  209870 containerd.go:635] all images are preloaded for containerd runtime.
	I0110 09:05:44.963961  209870 containerd.go:542] Images already preloaded, skipping extraction
	I0110 09:05:44.964020  209870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:05:44.993776  209870 containerd.go:635] all images are preloaded for containerd runtime.
	I0110 09:05:44.993799  209870 cache_images.go:86] Images are preloaded, skipping loading
	I0110 09:05:44.993808  209870 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I0110 09:05:44.993914  209870 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-447307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
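With the drop-in above in place, the effective kubelet unit and its cgroup driver can be checked directly; a minimal sketch (the config path comes from the ExecStart line above, so the second command only works once that file has been written):

    systemctl cat kubelet                                     # unit plus the 10-kubeadm.conf drop-in
    sudo grep -n 'cgroupDriver' /var/lib/kubelet/config.yaml  # expect "systemd" when --force-systemd is honored
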
	I0110 09:05:44.993982  209870 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I0110 09:05:45.035190  209870 cni.go:84] Creating CNI manager for ""
	I0110 09:05:45.035214  209870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 09:05:45.035241  209870 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 09:05:45.035266  209870 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-447307 NodeName:force-systemd-flag-447307 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 09:05:45.035486  209870 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-447307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 09:05:45.035574  209870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 09:05:45.067399  209870 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 09:05:45.067510  209870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 09:05:45.092872  209870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0110 09:05:45.121773  209870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 09:05:45.154954  209870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
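A hedged way to sanity-check the generated kubeadm config before the real init is a dry run, which renders the same phases without mutating the node (binary path and config path are the ones staged above):

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
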
	I0110 09:05:45.231953  209870 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 09:05:45.237475  209870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:05:45.260281  209870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:05:45.419211  209870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:05:45.437873  209870 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307 for IP: 192.168.85.2
	I0110 09:05:45.437942  209870 certs.go:195] generating shared ca certs ...
	I0110 09:05:45.437991  209870 certs.go:227] acquiring lock for ca certs: {Name:mk2efb7c26990a28337b434f05b8d75a57c7c690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.438190  209870 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key
	I0110 09:05:45.438256  209870 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key
	I0110 09:05:45.438302  209870 certs.go:257] generating profile certs ...
	I0110 09:05:45.438386  209870 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key
	I0110 09:05:45.438435  209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt with IP's: []
	I0110 09:05:45.568734  209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt ...
	I0110 09:05:45.568768  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.crt: {Name:mk93119e0751f692d1add2634b06b07d570f7c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.568970  209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key ...
	I0110 09:05:45.568988  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/client.key: {Name:mkd0ec99179f57a4bf574d82b9d5dd3231ca72d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.569084  209870 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d
	I0110 09:05:45.569103  209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 09:05:45.634799  209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d ...
	I0110 09:05:45.634831  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d: {Name:mk1f93a1a18d813cb88fd475e0986fb6bcc9bd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.635018  209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d ...
	I0110 09:05:45.635033  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d: {Name:mkc43e75e3e468932f9ce36624b08b9cf784c70c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.635122  209870 certs.go:382] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt.bdccf66d -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt
	I0110 09:05:45.635249  209870 certs.go:386] copying /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key.bdccf66d -> /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key
	I0110 09:05:45.635318  209870 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key
	I0110 09:05:45.635336  209870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt with IP's: []
	I0110 09:05:45.872246  209870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt ...
	I0110 09:05:45.872281  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt: {Name:mka6e1c552726af90963b0c4641d45cc7689a203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.872469  209870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key ...
	I0110 09:05:45.872484  209870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key: {Name:mk8e5271296bc709b5c836c748d108f6bf8306ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:05:45.872565  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 09:05:45.872587  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 09:05:45.872599  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 09:05:45.872615  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 09:05:45.872633  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 09:05:45.872650  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 09:05:45.872666  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 09:05:45.872681  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 09:05:45.872732  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem (1338 bytes)
	W0110 09:05:45.872776  209870 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257_empty.pem, impossibly tiny 0 bytes
	I0110 09:05:45.872788  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 09:05:45.872823  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/ca.pem (1078 bytes)
	I0110 09:05:45.872851  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/cert.pem (1123 bytes)
	I0110 09:05:45.872886  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/key.pem (1675 bytes)
	I0110 09:05:45.872937  209870 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem (1708 bytes)
	I0110 09:05:45.872975  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem -> /usr/share/ca-certificates/4257.pem
	I0110 09:05:45.872997  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem -> /usr/share/ca-certificates/42572.pem
	I0110 09:05:45.873020  209870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:45.873565  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 09:05:45.894181  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 09:05:45.914336  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 09:05:45.933562  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 09:05:45.952739  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 09:05:45.971678  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 09:05:45.990590  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 09:05:46.009080  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/force-systemd-flag-447307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 09:05:46.029612  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/certs/4257.pem --> /usr/share/ca-certificates/4257.pem (1338 bytes)
	I0110 09:05:46.049043  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/ssl/certs/42572.pem --> /usr/share/ca-certificates/42572.pem (1708 bytes)
	I0110 09:05:46.066769  209870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-2439/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 09:05:46.086243  209870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
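Once the profile certificates have been staged as above, their SANs can be inspected with openssl; a minimal sketch (the apiserver cert was generated above for IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
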
	I0110 09:05:46.099249  209870 ssh_runner.go:195] Run: openssl version
	I0110 09:05:46.106159  209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:46.113597  209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 09:05:46.121108  209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:46.124874  209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:21 /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:46.124949  209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:05:46.165876  209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 09:05:46.173607  209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 09:05:46.181172  209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4257.pem
	I0110 09:05:46.189176  209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4257.pem /etc/ssl/certs/4257.pem
	I0110 09:05:46.197604  209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4257.pem
	I0110 09:05:46.202316  209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:27 /usr/share/ca-certificates/4257.pem
	I0110 09:05:46.202452  209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4257.pem
	I0110 09:05:46.244731  209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 09:05:46.252500  209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4257.pem /etc/ssl/certs/51391683.0
	I0110 09:05:46.260155  209870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42572.pem
	I0110 09:05:46.267974  209870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42572.pem /etc/ssl/certs/42572.pem
	I0110 09:05:46.276015  209870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42572.pem
	I0110 09:05:46.280136  209870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:27 /usr/share/ca-certificates/42572.pem
	I0110 09:05:46.280201  209870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42572.pem
	I0110 09:05:46.321282  209870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 09:05:46.328915  209870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42572.pem /etc/ssl/certs/3ec20f2e.0
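The openssl-hash/ln sequence above follows the standard OpenSSL CApath layout: each trusted certificate gets a <subject-hash>.0 symlink under /etc/ssl/certs. A hedged generic sketch of the same convention (the run above computed b5213941 for minikubeCA.pem):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem  # a self-signed CA present in the path should verify OK
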
	I0110 09:05:46.336599  209870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 09:05:46.340309  209870 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 09:05:46.340362  209870 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-447307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-447307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:05:46.340440  209870 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0110 09:05:46.340505  209870 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:05:46.368732  209870 cri.go:96] found id: ""
	I0110 09:05:46.368825  209870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 09:05:46.377083  209870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 09:05:46.385046  209870 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:05:46.385169  209870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:05:46.393422  209870 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:05:46.393446  209870 kubeadm.go:158] found existing configuration files:
	
	I0110 09:05:46.393528  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:05:46.402057  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:05:46.402155  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:05:46.409739  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:05:46.417579  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:05:46.417663  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:05:46.425416  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:05:46.433477  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:05:46.433598  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:05:46.442123  209870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:05:46.453573  209870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:05:46.453686  209870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:05:46.464707  209870 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:05:46.525523  209870 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:05:46.525947  209870 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:05:46.597987  209870 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:05:46.598061  209870 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:05:46.598113  209870 kubeadm.go:319] OS: Linux
	I0110 09:05:46.598166  209870 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:05:46.598220  209870 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:05:46.598270  209870 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:05:46.598320  209870 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:05:46.598379  209870 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:05:46.598434  209870 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:05:46.598482  209870 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:05:46.598540  209870 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:05:46.598589  209870 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:05:46.662544  209870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:05:46.662658  209870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:05:46.662754  209870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:05:46.671756  209870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:05:46.678150  209870 out.go:252]   - Generating certificates and keys ...
	I0110 09:05:46.678326  209870 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:05:46.678444  209870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:05:47.409478  209870 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 09:05:47.578923  209870 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 09:05:47.675285  209870 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 09:05:47.915407  209870 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 09:05:48.056354  209870 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 09:05:48.056768  209870 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 09:05:48.397487  209870 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 09:05:48.397857  209870 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-447307 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 09:05:48.490818  209870 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 09:05:48.893329  209870 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 09:05:49.168813  209870 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 09:05:49.169088  209870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:05:49.386189  209870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:05:49.640500  209870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:05:50.248302  209870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:05:50.303575  209870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:05:50.498195  209870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:05:50.498841  209870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:05:50.501376  209870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:05:50.505131  209870 out.go:252]   - Booting up control plane ...
	I0110 09:05:50.505260  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:05:50.505353  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:05:50.505445  209870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:05:50.521530  209870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:05:50.521669  209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:05:50.530142  209870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:05:50.530443  209870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:05:50.530495  209870 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:05:50.669341  209870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:05:50.669965  209870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:07:31.354560  190061 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000294588s
	I0110 09:07:31.354595  190061 kubeadm.go:319] 
	I0110 09:07:31.354701  190061 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:07:31.354760  190061 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:07:31.355222  190061 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:07:31.355239  190061 kubeadm.go:319] 
	I0110 09:07:31.355581  190061 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:07:31.355643  190061 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:07:31.355711  190061 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:07:31.355718  190061 kubeadm.go:319] 
	I0110 09:07:31.360357  190061 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:07:31.360818  190061 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:07:31.360967  190061 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:07:31.361231  190061 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:07:31.361240  190061 kubeadm.go:319] 
	I0110 09:07:31.361319  190061 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
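The kubeadm hints above ('systemctl status kubelet', 'journalctl -xeu kubelet') can be combined with a couple of direct checks; a hedged troubleshooting sketch, run inside the node (for the docker driver, e.g. via docker exec -it force-systemd-flag-447307 bash):

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 100
    curl -sS http://127.0.0.1:10248/healthz; echo   # the endpoint kubeadm polls above
    stat -fc %T /sys/fs/cgroup/                     # cgroup2fs = cgroup v2, tmpfs = cgroup v1 (cf. the cgroups v1 warning above)
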
	I0110 09:07:31.361380  190061 kubeadm.go:403] duration metric: took 8m7.440784672s to StartCluster
	I0110 09:07:31.361415  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0110 09:07:31.361474  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 09:07:31.386403  190061 cri.go:96] found id: ""
	I0110 09:07:31.386438  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.386447  190061 logs.go:284] No container was found matching "kube-apiserver"
	I0110 09:07:31.386455  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0110 09:07:31.386520  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 09:07:31.412121  190061 cri.go:96] found id: ""
	I0110 09:07:31.412146  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.412156  190061 logs.go:284] No container was found matching "etcd"
	I0110 09:07:31.412163  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0110 09:07:31.412223  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 09:07:31.441790  190061 cri.go:96] found id: ""
	I0110 09:07:31.441812  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.441821  190061 logs.go:284] No container was found matching "coredns"
	I0110 09:07:31.441827  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0110 09:07:31.441883  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 09:07:31.476282  190061 cri.go:96] found id: ""
	I0110 09:07:31.476304  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.476313  190061 logs.go:284] No container was found matching "kube-scheduler"
	I0110 09:07:31.476320  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0110 09:07:31.476381  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 09:07:31.506699  190061 cri.go:96] found id: ""
	I0110 09:07:31.506775  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.506798  190061 logs.go:284] No container was found matching "kube-proxy"
	I0110 09:07:31.506836  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 09:07:31.506914  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 09:07:31.531431  190061 cri.go:96] found id: ""
	I0110 09:07:31.531458  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.531467  190061 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 09:07:31.531474  190061 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0110 09:07:31.531534  190061 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 09:07:31.560327  190061 cri.go:96] found id: ""
	I0110 09:07:31.560351  190061 logs.go:282] 0 containers: []
	W0110 09:07:31.560360  190061 logs.go:284] No container was found matching "kindnet"
	I0110 09:07:31.560371  190061 logs.go:123] Gathering logs for containerd ...
	I0110 09:07:31.560399  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0110 09:07:31.601110  190061 logs.go:123] Gathering logs for container status ...
	I0110 09:07:31.601144  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 09:07:31.630446  190061 logs.go:123] Gathering logs for kubelet ...
	I0110 09:07:31.630473  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 09:07:31.690726  190061 logs.go:123] Gathering logs for dmesg ...
	I0110 09:07:31.690762  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 09:07:31.706529  190061 logs.go:123] Gathering logs for describe nodes ...
	I0110 09:07:31.706558  190061 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 09:07:31.776623  190061 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:31.768093    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.768762    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.770420    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.771019    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.772729    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 09:07:31.768093    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.768762    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.770420    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.771019    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:31.772729    4862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0110 09:07:31.776659  190061 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000294588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 09:07:31.776731  190061 out.go:285] * 
	W0110 09:07:31.776891  190061 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000294588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:31.776912  190061 out.go:285] * 
	W0110 09:07:31.777160  190061 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
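For this profile the suggested log collection would look like the following hedged example (profile name and binary path taken from the run above):

    out/minikube-linux-arm64 logs -p force-systemd-flag-447307 --file=logs.txt
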
	I0110 09:07:31.782608  190061 out.go:203] 
	W0110 09:07:31.785602  190061 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000294588s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 09:07:31.785752  190061 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 09:07:31.785824  190061 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 09:07:31.788983  190061 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999514810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999531697Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999601810Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999647513Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999668995Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999682509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999693192Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999706534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999727596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Jan 10 08:59:21 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:21.999787486Z" level=info msg="Connect containerd service"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.000119027Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.000826852Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.010263018Z" level=info msg="Start subscribing containerd event"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.010467911Z" level=info msg="Start recovering state"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.010433884Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.010671548Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.048499285Z" level=info msg="Start event monitor"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.048577956Z" level=info msg="Start cni network conf syncer for default"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.048589715Z" level=info msg="Start streaming server"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.048602318Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.048617202Z" level=info msg="runtime interface starting up..."
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.048625374Z" level=info msg="starting plugins..."
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.048639873Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 08:59:22 force-systemd-env-562333 systemd[1]: Started containerd.service - containerd container runtime.
	Jan 10 08:59:22 force-systemd-env-562333 containerd[758]: time="2026-01-10T08:59:22.050754756Z" level=info msg="containerd successfully booted in 0.076493s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 09:07:33.209360    4978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:33.210160    4978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:33.211726    4978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:33.212177    4978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 09:07:33.213674    4978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan10 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015531] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.518244] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036376] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.856143] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.640312] kauditd_printk_skb: 39 callbacks suppressed
	[Jan10 08:20] hrtimer: interrupt took 13698190 ns
	
	
	==> kernel <==
	 09:07:33 up 50 min,  0 user,  load average: 0.43, 1.03, 1.71
	Linux force-systemd-env-562333 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 09:07:29 force-systemd-env-562333 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:30 force-systemd-env-562333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 10 09:07:30 force-systemd-env-562333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:30 force-systemd-env-562333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:30 force-systemd-env-562333 kubelet[4777]: E0110 09:07:30.728704    4777 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:30 force-systemd-env-562333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:30 force-systemd-env-562333 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:31 force-systemd-env-562333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 10 09:07:31 force-systemd-env-562333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:31 force-systemd-env-562333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:31 force-systemd-env-562333 kubelet[4805]: E0110 09:07:31.494449    4805 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:31 force-systemd-env-562333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:31 force-systemd-env-562333 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:32 force-systemd-env-562333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 10 09:07:32 force-systemd-env-562333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:32 force-systemd-env-562333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:32 force-systemd-env-562333 kubelet[4874]: E0110 09:07:32.249456    4874 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:32 force-systemd-env-562333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:32 force-systemd-env-562333 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 09:07:32 force-systemd-env-562333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Jan 10 09:07:32 force-systemd-env-562333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:32 force-systemd-env-562333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 09:07:32 force-systemd-env-562333 kubelet[4924]: E0110 09:07:32.996544    4924 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 09:07:33 force-systemd-env-562333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 09:07:33 force-systemd-env-562333 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-562333 -n force-systemd-env-562333
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-562333 -n force-systemd-env-562333: exit status 6 (464.564968ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 09:07:33.774559  213656 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-562333" does not appear in /home/jenkins/minikube-integration/22427-2439/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-562333" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-562333" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-562333
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-562333: (2.018212153s)
--- FAIL: TestForceSystemdEnv (506.28s)
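
Note on this failure: every kubelet restart in the log above exits with "kubelet is configured to not run on a host using cgroup v1". Kubelet v1.35 refuses to start on a cgroup v1 host unless the KubeletConfiguration option 'FailCgroupV1' is explicitly set to false (as the [WARNING SystemVerification] message states), and minikube's own suggestion in the output is to retry with --extra-config=kubelet.cgroup-driver=systemd. The snippet below is a minimal, hypothetical Go sketch, not part of minikube or this test suite, for confirming which cgroup hierarchy the build host exposes; it assumes the usual Linux convention that /sys/fs/cgroup/cgroup.controllers exists only on the cgroup v2 unified hierarchy.

	// Hypothetical helper, not minikube code: report whether the host is on the
	// cgroup v2 unified hierarchy or still on cgroup v1, which is the condition
	// kubelet v1.35 rejects unless FailCgroupV1 is explicitly set to false.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On cgroup v2 the unified hierarchy exposes this file; on cgroup v1 it is absent.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy) detected")
		} else if os.IsNotExist(err) {
			fmt.Println("cgroup v1 detected: kubelet v1.35+ refuses to start unless FailCgroupV1=false")
		} else {
			fmt.Println("could not determine cgroup version:", err)
		}
	}

On this agent the kubelet itself reports the host as cgroup v1, which is why the restart counter climbs from 319 to 322 in the journal above without the healthz endpoint at 127.0.0.1:10248 ever answering, and why kubeadm's wait-control-plane phase times out after 4m0s.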

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 --alsologtostderr
functional_test.go:439: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 --alsologtostderr: exit status 80 (1.141852795s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:29:58.737268   42710 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:29:58.737693   42710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:58.737722   42710 out.go:374] Setting ErrFile to fd 2...
	I0110 08:29:58.737742   42710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:58.738054   42710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:29:58.738700   42710 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:29:58.738763   42710 cache_images.go:404] Save images: ["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966"]
	I0110 08:29:58.738920   42710 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:29:58.739501   42710 cli_runner.go:164] Run: docker container inspect functional-822966 --format={{.State.Status}}
	I0110 08:29:58.756973   42710 cache_images.go:349] SaveCachedImages start: [ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966]
	I0110 08:29:58.757088   42710 ssh_runner.go:195] Run: systemctl --version
	I0110 08:29:58.757162   42710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-822966
	I0110 08:29:58.777746   42710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/functional-822966/id_rsa Username:docker}
	I0110 08:29:58.881989   42710 containerd.go:268] Checking existence of image with name "ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966" and sha ""
	I0110 08:29:58.882129   42710 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966
	I0110 08:29:58.911126   42710 cache_images.go:495] Saving image to: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966
	I0110 08:29:58.911291   42710 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/images/echo-server_functional-822966
	I0110 08:29:58.919620   42710 containerd.go:308] Saving image ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966: /var/lib/minikube/images/echo-server_functional-822966
	I0110 08:29:58.919686   42710 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images export /var/lib/minikube/images/echo-server_functional-822966 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966
	I0110 08:29:58.973233   42710 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/images/echo-server_functional-822966
	I0110 08:29:58.976851   42710 ssh_runner.go:448] scp /var/lib/minikube/images/echo-server_functional-822966 --> /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966 (2099712 bytes)
	I0110 08:29:59.011900   42710 cache_images.go:527] Transferred and saved /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966 to cache
	I0110 08:29:59.011938   42710 cache_images.go:367] Successfully saved all cached images
	I0110 08:29:59.011958   42710 cache_images.go:353] duration metric: took 254.951156ms to SaveCachedImages
	I0110 08:29:59.011974   42710 cache_images.go:458] succeeded pulling from : functional-822966
	I0110 08:29:59.011978   42710 cache_images.go:459] failed pulling from : 
	I0110 08:29:59.012184   42710 image.go:285] retrying tarball read for /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966 due to EOF (1/3)
	I0110 08:29:59.212615   42710 image.go:285] retrying tarball read for /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966 due to EOF (2/3)
	I0110 08:29:59.413166   42710 image.go:285] retrying tarball read for /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966 due to EOF (3/3)
	I0110 08:29:59.413277   42710 image.go:307] retrying manifest read for /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966 due to EOF (1/3)
	I0110 08:29:59.613744   42710 image.go:307] retrying manifest read for /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966 due to EOF (2/3)
	I0110 08:29:59.814201   42710 image.go:307] retrying manifest read for /home/jenkins/minikube-integration/22427-2439/.minikube/cache/images/arm64/ghcr.io/medyagh/image-mirrors/kicbase/echo-server_functional-822966 due to EOF (3/3)
	I0110 08:29:59.818015   42710 out.go:203] 
	W0110 08:29:59.820976   42710 out.go:285] X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: failed to determine the image tag from tarball, err: unexpected EOF
	X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: failed to determine the image tag from tarball, err: unexpected EOF
	W0110 08:29:59.821017   42710 out.go:285] * 
	* 
	W0110 08:29:59.822179   42710 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:29:59.825193   42710 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:441: saving image from minikube to daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)
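
For context on the GUEST_IMAGE_SAVE error: the image is exported inside the node with ctr -n=k8s.io images export and copied back to the host cache, but minikube then cannot determine the image tag from the resulting tarball and gives up after three EOF retries on both the tarball and its manifest. The sketch below is a minimal, self-contained illustration, not minikube's actual image.go code, of that kind of tag lookup; it assumes a docker-archive-style tarball whose manifest.json carries RepoTags (the archive written by ctr export may use the OCI layout instead), and a truncated or empty file fails in it with the same sort of unexpected EOF reported above.

	// Illustrative only (assumed docker-archive layout, not minikube's code):
	// read manifest.json from an image tarball and return its first RepoTag.
	package main

	import (
		"archive/tar"
		"encoding/json"
		"fmt"
		"io"
		"os"
	)

	type manifestEntry struct {
		RepoTags []string `json:"RepoTags"`
	}

	func tagFromTarball(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		tr := tar.NewReader(f)
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				return "", fmt.Errorf("no manifest.json found in %s", path)
			}
			if err != nil {
				// A truncated or empty archive typically fails here with an unexpected EOF.
				return "", err
			}
			if hdr.Name != "manifest.json" {
				continue
			}
			var entries []manifestEntry
			if err := json.NewDecoder(tr).Decode(&entries); err != nil {
				return "", err
			}
			if len(entries) == 0 || len(entries[0].RepoTags) == 0 {
				return "", fmt.Errorf("manifest.json in %s has no RepoTags", path)
			}
			return entries[0].RepoTags[0], nil
		}
	}

	func main() {
		// Hypothetical path, for illustration only.
		tag, err := tagFromTarball("/tmp/echo-server.tar")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("image tag:", tag)
	}

Pointing such a check at the 2099712-byte file that was scp'd into the host cache above would distinguish a truncated transfer from a tarball that is intact but simply not in the layout the reader expects.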

                                                
                                    

Test pass (304/337)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.65
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.34
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.34
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 140.3
29 TestAddons/serial/Volcano 40.48
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.93
35 TestAddons/parallel/Registry 15.72
36 TestAddons/parallel/RegistryCreds 0.78
37 TestAddons/parallel/Ingress 18.54
38 TestAddons/parallel/InspektorGadget 10.77
39 TestAddons/parallel/MetricsServer 7.13
41 TestAddons/parallel/CSI 44.47
42 TestAddons/parallel/Headlamp 16.8
43 TestAddons/parallel/CloudSpanner 5.68
44 TestAddons/parallel/LocalPath 51.45
45 TestAddons/parallel/NvidiaDevicePlugin 6.66
46 TestAddons/parallel/Yakd 11.86
48 TestAddons/StoppedEnableDisable 12.41
49 TestCertOptions 29.6
50 TestCertExpiration 216.09
54 TestDockerEnvContainerd 40.68
58 TestErrorSpam/setup 27.75
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.22
61 TestErrorSpam/pause 1.85
62 TestErrorSpam/unpause 1.78
63 TestErrorSpam/stop 1.62
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 45.39
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.34
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.18
75 TestFunctional/serial/CacheCmd/cache/add_local 1.22
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 48.62
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 4.83
89 TestFunctional/parallel/ConfigCmd 0.53
90 TestFunctional/parallel/DashboardCmd 7.53
91 TestFunctional/parallel/DryRun 0.59
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.12
97 TestFunctional/parallel/ServiceCmdConnect 9.6
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 22.97
101 TestFunctional/parallel/SSHCmd 0.73
102 TestFunctional/parallel/CpCmd 2.38
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.24
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.42
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
127 TestFunctional/parallel/ProfileCmd/profile_list 0.47
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/MountCmd/any-port 8.53
130 TestFunctional/parallel/ServiceCmd/List 0.57
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
133 TestFunctional/parallel/ServiceCmd/Format 0.43
134 TestFunctional/parallel/ServiceCmd/URL 0.41
135 TestFunctional/parallel/MountCmd/specific-port 2.25
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.67
137 TestFunctional/parallel/Version/short 0.09
138 TestFunctional/parallel/Version/components 1.3
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.63
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.39
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.38
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.67
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.14
144 TestFunctional/parallel/ImageCommands/Setup 0.64
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 141.29
163 TestMultiControlPlane/serial/DeployApp 6.82
164 TestMultiControlPlane/serial/PingHostFromPods 1.59
165 TestMultiControlPlane/serial/AddWorkerNode 29.31
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 20.03
169 TestMultiControlPlane/serial/StopSecondaryNode 12.92
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 12.67
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.97
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.06
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
176 TestMultiControlPlane/serial/StopCluster 36.19
177 TestMultiControlPlane/serial/RestartCluster 65.55
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 61.04
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
185 TestJSONOutput/start/Command 45.56
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.72
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.63
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.95
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 33.12
211 TestKicCustomNetwork/use_default_bridge_network 29.92
212 TestKicExistingNetwork 32.06
213 TestKicCustomSubnet 29.42
214 TestKicStaticIP 31.33
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 59.83
219 TestMountStart/serial/StartWithMountFirst 8.44
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 8.15
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 7.83
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 73.85
231 TestMultiNode/serial/DeployApp2Nodes 6.12
232 TestMultiNode/serial/PingHostFrom2Pods 0.96
233 TestMultiNode/serial/AddNode 27.82
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.47
237 TestMultiNode/serial/StopNode 2.42
238 TestMultiNode/serial/StartAfterStop 7.9
239 TestMultiNode/serial/RestartKeepsNodes 80.56
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.11
242 TestMultiNode/serial/RestartMultiNode 50.8
243 TestMultiNode/serial/ValidateNameConflict 30.68
250 TestScheduledStopUnix 101.81
253 TestInsufficientStorage 10.05
254 TestRunningBinaryUpgrade 327.83
256 TestKubernetesUpgrade 91.82
257 TestMissingContainerUpgrade 134.62
259 TestPause/serial/Start 53.05
260 TestPause/serial/SecondStartNoReconfiguration 8.79
261 TestPause/serial/Pause 0.88
262 TestPause/serial/VerifyStatus 0.44
263 TestPause/serial/Unpause 0.99
264 TestPause/serial/PauseAgain 1.27
265 TestPause/serial/DeletePaused 3.42
266 TestPause/serial/VerifyDeletedResources 0.16
267 TestStoppedBinaryUpgrade/Setup 0.98
268 TestStoppedBinaryUpgrade/Upgrade 315.52
269 TestStoppedBinaryUpgrade/MinikubeLogs 2.12
277 TestPreload/Start-NoPreload-PullImage 63.61
278 TestPreload/Restart-With-Preload-Check-User-Image 52.75
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
282 TestNoKubernetes/serial/StartWithK8s 28.79
283 TestNoKubernetes/serial/StartWithStopK8s 23.01
284 TestNoKubernetes/serial/Start 7.85
285 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
287 TestNoKubernetes/serial/ProfileList 1.05
288 TestNoKubernetes/serial/Stop 1.32
289 TestNoKubernetes/serial/StartNoArgs 6.85
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
298 TestNetworkPlugins/group/false 3.62
303 TestStartStop/group/old-k8s-version/serial/FirstStart 59.26
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.26
306 TestStartStop/group/old-k8s-version/serial/Stop 12.1
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
308 TestStartStop/group/old-k8s-version/serial/SecondStart 50.51
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
312 TestStartStop/group/old-k8s-version/serial/Pause 3.08
314 TestStartStop/group/no-preload/serial/FirstStart 53.13
315 TestStartStop/group/no-preload/serial/DeployApp 9.34
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
317 TestStartStop/group/no-preload/serial/Stop 12.1
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
319 TestStartStop/group/no-preload/serial/SecondStart 50.2
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
323 TestStartStop/group/no-preload/serial/Pause 3.07
325 TestStartStop/group/embed-certs/serial/FirstStart 45.82
326 TestStartStop/group/embed-certs/serial/DeployApp 10.38
327 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
328 TestStartStop/group/embed-certs/serial/Stop 12.61
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.31
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
332 TestStartStop/group/embed-certs/serial/SecondStart 55.16
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.58
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.21
336 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.27
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
341 TestStartStop/group/embed-certs/serial/Pause 3.83
343 TestStartStop/group/newest-cni/serial/FirstStart 35.1
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.64
346 TestStartStop/group/newest-cni/serial/Stop 1.57
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
348 TestStartStop/group/newest-cni/serial/SecondStart 15.8
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
350 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
354 TestStartStop/group/newest-cni/serial/Pause 3.86
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.03
357 TestPreload/PreloadSrc/gcs 4.69
358 TestNetworkPlugins/group/auto/Start 52.1
359 TestPreload/PreloadSrc/github 4.76
360 TestPreload/PreloadSrc/gcs-cached 1.01
361 TestNetworkPlugins/group/kindnet/Start 51.2
362 TestNetworkPlugins/group/auto/KubeletFlags 0.33
363 TestNetworkPlugins/group/auto/NetCatPod 10.3
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/auto/DNS 0.2
366 TestNetworkPlugins/group/auto/Localhost 0.16
367 TestNetworkPlugins/group/auto/HairPin 0.15
368 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
369 TestNetworkPlugins/group/kindnet/NetCatPod 9.28
370 TestNetworkPlugins/group/kindnet/DNS 0.28
371 TestNetworkPlugins/group/kindnet/Localhost 0.22
372 TestNetworkPlugins/group/kindnet/HairPin 0.23
373 TestNetworkPlugins/group/calico/Start 77.87
374 TestNetworkPlugins/group/custom-flannel/Start 54.65
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
377 TestNetworkPlugins/group/calico/ControllerPod 6.01
378 TestNetworkPlugins/group/custom-flannel/DNS 0.18
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
381 TestNetworkPlugins/group/calico/KubeletFlags 0.34
382 TestNetworkPlugins/group/calico/NetCatPod 11.36
383 TestNetworkPlugins/group/calico/DNS 0.27
384 TestNetworkPlugins/group/calico/Localhost 0.24
385 TestNetworkPlugins/group/calico/HairPin 0.28
386 TestNetworkPlugins/group/enable-default-cni/Start 84.52
387 TestNetworkPlugins/group/flannel/Start 54.15
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
390 TestNetworkPlugins/group/flannel/NetCatPod 10.26
391 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
392 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.32
393 TestNetworkPlugins/group/flannel/DNS 0.19
394 TestNetworkPlugins/group/flannel/Localhost 0.19
395 TestNetworkPlugins/group/flannel/HairPin 0.15
396 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
397 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
398 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
399 TestNetworkPlugins/group/bridge/Start 46.15
400 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
401 TestNetworkPlugins/group/bridge/NetCatPod 9.28
402 TestNetworkPlugins/group/bridge/DNS 0.17
403 TestNetworkPlugins/group/bridge/Localhost 0.14
404 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (7.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-517243 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-517243 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.65210905s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.65s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0110 08:20:26.508061    4257 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0110 08:20:26.508140    4257 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-517243
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-517243: exit status 85 (93.805897ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-517243 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-517243 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:20:18
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:20:18.901418    4263 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:20:18.901613    4263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:18.901639    4263 out.go:374] Setting ErrFile to fd 2...
	I0110 08:20:18.901659    4263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:18.901968    4263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	W0110 08:20:18.902141    4263 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22427-2439/.minikube/config/config.json: open /home/jenkins/minikube-integration/22427-2439/.minikube/config/config.json: no such file or directory
	I0110 08:20:18.902624    4263 out.go:368] Setting JSON to true
	I0110 08:20:18.903449    4263 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":172,"bootTime":1768033047,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 08:20:18.903537    4263 start.go:143] virtualization:  
	I0110 08:20:18.909296    4263 out.go:99] [download-only-517243] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0110 08:20:18.909488    4263 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball: no such file or directory
	I0110 08:20:18.909613    4263 notify.go:221] Checking for updates...
	I0110 08:20:18.913323    4263 out.go:171] MINIKUBE_LOCATION=22427
	I0110 08:20:18.916525    4263 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:20:18.919613    4263 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 08:20:18.922709    4263 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 08:20:18.925789    4263 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 08:20:18.931843    4263 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 08:20:18.932136    4263 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:20:18.966204    4263 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:20:18.966336    4263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:19.373045    4263 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 08:20:19.363772089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:20:19.373148    4263 docker.go:319] overlay module found
	I0110 08:20:19.376147    4263 out.go:99] Using the docker driver based on user configuration
	I0110 08:20:19.376186    4263 start.go:309] selected driver: docker
	I0110 08:20:19.376193    4263 start.go:928] validating driver "docker" against <nil>
	I0110 08:20:19.376310    4263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:19.432024    4263 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 08:20:19.423184128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:20:19.432174    4263 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:20:19.432448    4263 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 08:20:19.432624    4263 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:20:19.435829    4263 out.go:171] Using Docker driver with root privileges
	I0110 08:20:19.438778    4263 cni.go:84] Creating CNI manager for ""
	I0110 08:20:19.438844    4263 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0110 08:20:19.438862    4263 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:20:19.438941    4263 start.go:353] cluster config:
	{Name:download-only-517243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-517243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:20:19.441930    4263 out.go:99] Starting "download-only-517243" primary control-plane node in "download-only-517243" cluster
	I0110 08:20:19.441958    4263 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0110 08:20:19.444827    4263 out.go:99] Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:20:19.444871    4263 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0110 08:20:19.445035    4263 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:20:19.466504    4263 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 08:20:19.466717    4263 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 08:20:19.466831    4263 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 08:20:19.491820    4263 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0110 08:20:19.491872    4263 cache.go:65] Caching tarball of preloaded images
	I0110 08:20:19.492036    4263 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0110 08:20:19.495489    4263 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0110 08:20:19.495522    4263 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0110 08:20:19.495529    4263 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I0110 08:20:19.570249    4263 preload.go:313] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I0110 08:20:19.570390    4263 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0110 08:20:23.013959    4263 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I0110 08:20:23.014369    4263 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/download-only-517243/config.json ...
	I0110 08:20:23.014405    4263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/download-only-517243/config.json: {Name:mkd2e4995d19b4a133132cf641ef91f80cc8b260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:23.014569    4263 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0110 08:20:23.014757    4263 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22427-2439/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-517243 host does not exist
	  To start a cluster, run: "minikube start -p download-only-517243"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
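The log above shows the preload flow: minikube resolves the v1.28.0 containerd preload tarball on GCS, asks the GCS API for its MD5, then downloads it into the profile cache. The same fetch-and-verify step can be reproduced by hand roughly as below; the URL and checksum are copied from the log lines above, while the destination path assumes the default ~/.minikube cache layout.

# URL and md5 taken verbatim from the "Downloading preload" / "Got checksum" lines above.
PRELOAD_URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
EXPECTED_MD5="38d7f581f2fa4226c8af2c9106b982b7"
DEST="$HOME/.minikube/cache/preloaded-tarball/$(basename "$PRELOAD_URL")"   # assumed default cache layout

mkdir -p "$(dirname "$DEST")"
curl -fL "$PRELOAD_URL" -o "$DEST"
echo "$EXPECTED_MD5  $DEST" | md5sum -c -

A checksum mismatch here would point at a bad download rather than a test-framework problem, without rerunning the whole download-only test.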

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-517243
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/json-events (3.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-234777 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-234777 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.340425018s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0110 08:20:30.296020    4257 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0110 08:20:30.296056    4257 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-234777
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-234777: exit status 85 (341.211263ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-517243 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-517243 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ delete  │ -p download-only-517243                                                                                                                                                               │ download-only-517243 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ start   │ -o=json --download-only -p download-only-234777 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-234777 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:20:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:20:26.997324    4460 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:20:26.997439    4460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:26.997449    4460 out.go:374] Setting ErrFile to fd 2...
	I0110 08:20:26.997453    4460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:26.997712    4460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:20:26.998113    4460 out.go:368] Setting JSON to true
	I0110 08:20:26.998848    4460 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":180,"bootTime":1768033047,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 08:20:26.998922    4460 start.go:143] virtualization:  
	I0110 08:20:27.002477    4460 out.go:99] [download-only-234777] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 08:20:27.002797    4460 notify.go:221] Checking for updates...
	I0110 08:20:27.007083    4460 out.go:171] MINIKUBE_LOCATION=22427
	I0110 08:20:27.010587    4460 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:20:27.013715    4460 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 08:20:27.016746    4460 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 08:20:27.019879    4460 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 08:20:27.025803    4460 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 08:20:27.026098    4460 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:20:27.046253    4460 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:20:27.046353    4460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:27.116470    4460 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-10 08:20:27.107086396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:20:27.116576    4460 docker.go:319] overlay module found
	I0110 08:20:27.119653    4460 out.go:99] Using the docker driver based on user configuration
	I0110 08:20:27.119699    4460 start.go:309] selected driver: docker
	I0110 08:20:27.119707    4460 start.go:928] validating driver "docker" against <nil>
	I0110 08:20:27.119802    4460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:27.177418    4460 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-10 08:20:27.167995976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:20:27.177585    4460 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:20:27.177872    4460 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 08:20:27.178017    4460 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:20:27.181081    4460 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-234777 host does not exist
	  To start a cluster, run: "minikube start -p download-only-234777"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-234777
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I0110 08:20:31.705007    4257 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-582796 --alsologtostderr --binary-mirror http://127.0.0.1:43821 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-582796" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-582796
--- PASS: TestBinaryMirror (0.58s)
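TestBinaryMirror passes --binary-mirror http://127.0.0.1:43821 so that kubectl and the other Kubernetes binaries are fetched from a local HTTP server instead of dl.k8s.io. A minimal way to poke at the same flag by hand is sketched below; the profile name binary-mirror-demo is made up for the example, and the mirror directory is left empty on purpose so the server's access log shows which paths minikube expects a real mirror to serve.

mkdir -p mirror && (cd mirror && python3 -m http.server 43821) &
out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:43821 --driver=docker --container-runtime=containerd
# http.server prints each requested path, revealing the layout a real mirror must provide.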

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-574801
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-574801: exit status 85 (78.347787ms)

                                                
                                                
-- stdout --
	* Profile "addons-574801" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-574801"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-574801
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-574801: exit status 85 (77.457599ms)

                                                
                                                
-- stdout --
	* Profile "addons-574801" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-574801"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (140.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-574801 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-574801 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m20.300972916s)
--- PASS: TestAddons/Setup (140.30s)
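The start command above enables fifteen addons in one shot. On an existing profile the same state can be inspected or adjusted per addon; a short sketch using the profile from this run:

out/minikube-linux-arm64 addons list -p addons-574801              # shows which addons are enabled or disabled
out/minikube-linux-arm64 addons enable metrics-server -p addons-574801
out/minikube-linux-arm64 addons disable metrics-server -p addons-574801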

                                                
                                    
x
+
TestAddons/serial/Volcano (40.48s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 47.751687ms
addons_test.go:878: volcano-admission stabilized in 48.548298ms
addons_test.go:886: volcano-controller stabilized in 48.590242ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-6wf2t" [c2c10098-9b5f-402f-a305-49fa8ab8ed83] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003211435s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-wm6jw" [eeb5306d-d432-4625-b0af-0c004279e008] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003246935s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-ddj5x" [3c9db9ed-68c1-47f8-93f7-63b77f8314b1] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004020756s
addons_test.go:905: (dbg) Run:  kubectl --context addons-574801 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-574801 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-574801 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [2f6763c4-7550-422a-a4a1-f12183302e21] Pending
helpers_test.go:353: "test-job-nginx-0" [2f6763c4-7550-422a-a4a1-f12183302e21] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [2f6763c4-7550-422a-a4a1-f12183302e21] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003419896s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-574801 addons disable volcano --alsologtostderr -v=1: (11.749487609s)
--- PASS: TestAddons/serial/Volcano (40.48s)
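The Volcano check submits testdata/vcjob.yaml and waits for the pod test-job-nginx-0 to come up. That manifest is not reproduced in this report; a minimal Volcano Job of the same shape (job name, task name and namespace chosen to match the pod seen above, everything else illustrative only) would look roughly like:

kubectl --context addons-574801 create namespace my-volcano
kubectl --context addons-574801 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF

The pod name in the log, test-job-nginx-0, follows Volcano's <job>-<task>-<index> naming, which is why the single task above is called nginx.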

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-574801 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-574801 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.93s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-574801 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-574801 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f6458bae-2839-4a49-98e5-f331ffaddad0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f6458bae-2839-4a49-98e5-f331ffaddad0] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004046864s
addons_test.go:696: (dbg) Run:  kubectl --context addons-574801 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-574801 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-574801 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-574801 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.93s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.485671ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-2mrnk" [a31259ee-e69d-4014-8221-4105f5b01ffb] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004553556s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-prqh7" [b0abe19e-4fa4-44fc-9ac5-4d4a1aa55ce3] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003451407s
addons_test.go:394: (dbg) Run:  kubectl --context addons-574801 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-574801 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-574801 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.695182757s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 ip
2026/01/10 08:24:07 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.72s)
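The registry check runs a busybox pod that wget-spiders the in-cluster service, then hits the node IP on port 5000 from the host (the DEBUG GET line above). The same endpoint can be queried from the host with the standard registry v2 API; the /v2/ and /v2/_catalog paths are not used by the test and are shown here only as convenient probes.

REG_IP="$(out/minikube-linux-arm64 -p addons-574801 ip)"
curl -s "http://${REG_IP}:5000/v2/"           # an empty JSON body means the registry is answering
curl -s "http://${REG_IP}:5000/v2/_catalog"   # lists repositories pushed so far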

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.78s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.652491ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-574801
addons_test.go:334: (dbg) Run:  kubectl --context addons-574801 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (18.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-574801 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-574801 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-574801 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [b1d4957d-d21e-42c8-9c6e-94d34791bcf9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [b1d4957d-d21e-42c8-9c6e-94d34791bcf9] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.0032887s
I0110 08:25:24.813665    4257 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-574801 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-574801 addons disable ingress-dns --alsologtostderr -v=1: (1.821159759s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-574801 addons disable ingress --alsologtostderr -v=1: (7.837361494s)
--- PASS: TestAddons/parallel/Ingress (18.54s)
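The ingress test applies testdata/nginx-ingress-v1.yaml and then curls the controller from inside the node with a Host: nginx.example.com header. The testdata manifest is not included in this report; an Ingress of the same shape (service name and port are assumptions made for this sketch) looks roughly like:

kubectl --context addons-574801 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
EOF

The later nslookup hello-john.test 192.168.49.2 step exercises the ingress-dns addon, which answers DNS queries for ingress hostnames at the node IP.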

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-bwhvm" [f3f11170-5917-4ed7-8c62-7148a9a5f627] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004650087s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-574801 addons disable inspektor-gadget --alsologtostderr -v=1: (5.759720002s)
--- PASS: TestAddons/parallel/InspektorGadget (10.77s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.13s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 5.131101ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-dl6qf" [4b3825ff-5d50-49a7-9944-cdcd84fd77dd] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0090406s
addons_test.go:465: (dbg) Run:  kubectl --context addons-574801 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.13s)

                                                
                                    
x
+
TestAddons/parallel/CSI (44.47s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0110 08:24:04.357143    4257 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0110 08:24:04.361109    4257 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0110 08:24:04.361135    4257 kapi.go:107] duration metric: took 7.024951ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.035831ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-574801 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-574801 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [96bd49f6-5541-4b04-bdf2-dcd80972b17c] Pending
helpers_test.go:353: "task-pv-pod" [96bd49f6-5541-4b04-bdf2-dcd80972b17c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [96bd49f6-5541-4b04-bdf2-dcd80972b17c] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004034923s
addons_test.go:574: (dbg) Run:  kubectl --context addons-574801 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-574801 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-574801 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-574801 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-574801 delete pod task-pv-pod: (1.297534948s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-574801 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-574801 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-574801 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [a8ca2a13-9043-4030-9c1f-0d0d1f2f553e] Pending
helpers_test.go:353: "task-pv-pod-restore" [a8ca2a13-9043-4030-9c1f-0d0d1f2f553e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [a8ca2a13-9043-4030-9c1f-0d0d1f2f553e] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004150254s
addons_test.go:616: (dbg) Run:  kubectl --context addons-574801 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-574801 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-574801 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-574801 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.874045459s)
--- PASS: TestAddons/parallel/CSI (44.47s)
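The CSI flow above creates a PVC (hpvc), a pod that mounts it, a VolumeSnapshot, and then a second PVC restored from that snapshot. The pvc.yaml itself is not part of this report; a PVC of the shape the csi-hostpath-driver addon expects would look roughly as below (the storage class name csi-hostpath-sc is an assumption, confirm with kubectl get storageclass):

kubectl --context addons-574801 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  storageClassName: csi-hostpath-sc   # assumed class name; check `kubectl get storageclass`
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF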

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-574801 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-sgdm2" [b263cefc-a291-4c7d-87e0-b42edc1a5bad] Pending
helpers_test.go:353: "headlamp-6d8d595f-sgdm2" [b263cefc-a291-4c7d-87e0-b42edc1a5bad] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-sgdm2" [b263cefc-a291-4c7d-87e0-b42edc1a5bad] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003408424s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-574801 addons disable headlamp --alsologtostderr -v=1: (5.811272439s)
--- PASS: TestAddons/parallel/Headlamp (16.80s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-lbrdj" [901a9d31-788e-435a-965d-59bc58dcdff6] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005276271s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.45s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-574801 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-574801 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-574801 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [3bdde860-2c09-4621-9574-0c53ff22566d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [3bdde860-2c09-4621-9574-0c53ff22566d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [3bdde860-2c09-4621-9574-0c53ff22566d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003700106s
addons_test.go:969: (dbg) Run:  kubectl --context addons-574801 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 ssh "cat /opt/local-path-provisioner/pvc-dd85a26b-f228-4ea3-8cd5-5f3b9383449a_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-574801 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-574801 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-574801 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.192938898s)
--- PASS: TestAddons/parallel/LocalPath (51.45s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.66s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-rq4gh" [42091339-0c03-47d5-a770-a781f82bf2d4] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003131911s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.66s)

                                                
                                    
TestAddons/parallel/Yakd (11.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-vtpfd" [acaa5a87-a28c-4c42-bc54-24eeb2f2f210] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004144136s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-574801 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-574801 addons disable yakd --alsologtostderr -v=1: (5.85398662s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-574801
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-574801: (12.111714266s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-574801
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-574801
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-574801
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

                                                
                                    
TestCertOptions (29.6s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-050298 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0110 09:07:52.859505    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-050298 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (26.780953369s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-050298 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-050298 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-050298 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-050298" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-050298
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-050298: (2.073076065s)
--- PASS: TestCertOptions (29.60s)
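
The same SAN/port check can be run manually; a sketch based only on the commands above, reusing this run's profile name:
    out/minikube-linux-arm64 start -p cert-options-050298 --memory=3072 \
        --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p cert-options-050298 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    kubectl --context cert-options-050298 config view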

                                                
                                    
TestCertExpiration (216.09s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-223749 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-223749 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (27.122071917s)
E0110 09:02:52.862066    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:04:14.206955    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-223749 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-223749 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.612541067s)
helpers_test.go:176: Cleaning up "cert-expiration-223749" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-223749
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-223749: (2.352095354s)
--- PASS: TestCertExpiration (216.09s)
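
The renewal flow exercised here is: start with a short certificate lifetime, let it lapse, then restart with a longer --cert-expiration so minikube regenerates the certificates. A sketch of that sequence, assuming the same profile:
    out/minikube-linux-arm64 start -p cert-expiration-223749 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # roughly three minutes later, restarting with a one-year expiration renews the certs
    out/minikube-linux-arm64 start -p cert-expiration-223749 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd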

                                                
                                    
TestDockerEnvContainerd (40.68s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-148038 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-148038 --driver=docker  --container-runtime=containerd: (25.287164495s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-148038"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-148038": (1.14192621s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sDlTvBoGESsX/agent.24712" SSH_AGENT_PID="24713" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sDlTvBoGESsX/agent.24712" SSH_AGENT_PID="24713" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sDlTvBoGESsX/agent.24712" SSH_AGENT_PID="24713" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.381985008s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-sDlTvBoGESsX/agent.24712" SSH_AGENT_PID="24713" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-148038" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-148038
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-148038: (2.018637579s)
--- PASS: TestDockerEnvContainerd (40.68s)
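
The docker-env flow above amounts to pointing a local docker CLI at the node over SSH. A minimal sketch, assuming the docker-env output is eval'd rather than the SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST variables being set by hand as the test does:
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-148038)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls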

                                                
                                    
TestErrorSpam/setup (27.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-505074 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-505074 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-505074 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-505074 --driver=docker  --container-runtime=containerd: (27.753670136s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (27.75s)

                                                
                                    
TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

                                                
                                    
TestErrorSpam/status (1.22s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 status
--- PASS: TestErrorSpam/status (1.22s)

                                                
                                    
TestErrorSpam/pause (1.85s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 pause
--- PASS: TestErrorSpam/pause (1.85s)

                                                
                                    
TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

                                                
                                    
TestErrorSpam/stop (1.62s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 stop: (1.425382968s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-505074 --log_dir /tmp/nospam-505074 stop
--- PASS: TestErrorSpam/stop (1.62s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22427-2439/.minikube/files/etc/test/nested/copy/4257/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (45.39s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-822966 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0110 08:27:52.861178    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:52.867213    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:52.877565    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:52.897904    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:52.938210    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:53.018537    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:53.178961    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:53.499512    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:54.140074    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:55.420317    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:57.981600    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-822966 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (45.389300738s)
--- PASS: TestFunctional/serial/StartWithProxy (45.39s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.34s)

=== RUN   TestFunctional/serial/SoftStart
I0110 08:27:59.940147    4257 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-822966 --alsologtostderr -v=8
E0110 08:28:03.101813    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-822966 --alsologtostderr -v=8: (7.333712193s)
functional_test.go:678: soft start took 7.335215968s for "functional-822966" cluster.
I0110 08:28:07.274152    4257 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (7.34s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-822966 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 cache add registry.k8s.io/pause:3.1: (1.538938722s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 cache add registry.k8s.io/pause:3.3: (1.367973133s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 cache add registry.k8s.io/pause:latest: (1.269270203s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-822966 /tmp/TestFunctionalserialCacheCmdcacheadd_local3163788772/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cache add minikube-local-cache-test:functional-822966
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cache delete minikube-local-cache-test:functional-822966
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-822966
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)
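
The local-image path is: build an image with the host docker daemon, add it to minikube's cache, then drop the cache entry and the host image. A sketch of the same cycle; ./build-context is a placeholder for the generated /tmp build directory used above:
    docker build -t minikube-local-cache-test:functional-822966 ./build-context
    out/minikube-linux-arm64 -p functional-822966 cache add minikube-local-cache-test:functional-822966
    out/minikube-linux-arm64 -p functional-822966 cache delete minikube-local-cache-test:functional-822966
    docker rmi minikube-local-cache-test:functional-822966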

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh sudo crictl rmi registry.k8s.io/pause:latest
E0110 08:28:13.342712    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.216646ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cache reload
functional_test.go:1178: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 cache reload: (1.059131076s)
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)
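
cache reload re-pushes cached images into the node's container runtime, which is what the test relies on after deleting pause:latest with crictl. A sketch of the same round trip, using only commands shown above:
    out/minikube-linux-arm64 -p functional-822966 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-822966 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
    out/minikube-linux-arm64 -p functional-822966 cache reload
    out/minikube-linux-arm64 -p functional-822966 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again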

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 kubectl -- --context functional-822966 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-822966 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (48.62s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-822966 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0110 08:28:33.823588    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-822966 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.618031497s)
functional_test.go:776: restart took 48.618139487s for "functional-822966" cluster.
I0110 08:29:04.331056    4257 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (48.62s)
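
--extra-config passes component.flag=value pairs through to the named Kubernetes component and is applied by restarting the existing profile; --wait=all is why the restart above blocks until every component reports healthy. A sketch, with a follow-up check that is an assumption on my part and not part of the test:
    out/minikube-linux-arm64 start -p functional-822966 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # hypothetical confirmation that the flag reached the apiserver static pod (not done by the test):
    kubectl --context functional-822966 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins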

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-822966 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 logs: (1.489849873s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 logs --file /tmp/TestFunctionalserialLogsFileCmd4257502057/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 logs --file /tmp/TestFunctionalserialLogsFileCmd4257502057/001/logs.txt: (1.507868089s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.83s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-822966 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-822966
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-822966: exit status 115 (503.628861ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30104 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-822966 delete -f testdata/invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-822966 delete -f testdata/invalidsvc.yaml: (1.058381692s)
--- PASS: TestFunctional/serial/InvalidService (4.83s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 config get cpus: exit status 14 (99.800731ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 config get cpus: exit status 14 (82.049737ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
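
minikube config stores per-profile defaults; get on a key that is not set exits with status 14, which is what the test asserts twice above. The set/get/unset cycle, for reference:
    out/minikube-linux-arm64 -p functional-822966 config set cpus 2
    out/minikube-linux-arm64 -p functional-822966 config get cpus      # prints 2
    out/minikube-linux-arm64 -p functional-822966 config unset cpus
    out/minikube-linux-arm64 -p functional-822966 config get cpus      # exit status 14: key not found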

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-822966 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-822966 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 40063: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.53s)

                                                
                                    
TestFunctional/parallel/DryRun (0.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-822966 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-822966 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (243.898935ms)

                                                
                                                
-- stdout --
	* [functional-822966] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:29:43.605317   39712 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:29:43.605498   39712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:43.605528   39712 out.go:374] Setting ErrFile to fd 2...
	I0110 08:29:43.605549   39712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:43.605824   39712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:29:43.606189   39712 out.go:368] Setting JSON to false
	I0110 08:29:43.607131   39712 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":737,"bootTime":1768033047,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 08:29:43.607223   39712 start.go:143] virtualization:  
	I0110 08:29:43.610562   39712 out.go:179] * [functional-822966] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 08:29:43.613630   39712 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:29:43.613698   39712 notify.go:221] Checking for updates...
	I0110 08:29:43.619730   39712 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:29:43.622750   39712 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 08:29:43.625790   39712 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 08:29:43.628704   39712 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 08:29:43.631709   39712 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:29:43.635134   39712 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:29:43.635885   39712 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:29:43.675317   39712 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:29:43.675591   39712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:29:43.777949   39712 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 08:29:43.767217856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:29:43.778054   39712 docker.go:319] overlay module found
	I0110 08:29:43.782301   39712 out.go:179] * Using the docker driver based on existing profile
	I0110 08:29:43.785453   39712 start.go:309] selected driver: docker
	I0110 08:29:43.785475   39712 start.go:928] validating driver "docker" against &{Name:functional-822966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-822966 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:29:43.785589   39712 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:29:43.792121   39712 out.go:203] 
	W0110 08:29:43.795092   39712 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0110 08:29:43.798007   39712 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-822966 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.59s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-822966 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-822966 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (218.296503ms)

                                                
                                                
-- stdout --
	* [functional-822966] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:29:43.397882   39664 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:29:43.398016   39664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:43.398027   39664 out.go:374] Setting ErrFile to fd 2...
	I0110 08:29:43.398032   39664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:29:43.398479   39664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:29:43.398956   39664 out.go:368] Setting JSON to false
	I0110 08:29:43.400241   39664 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":737,"bootTime":1768033047,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 08:29:43.400319   39664 start.go:143] virtualization:  
	I0110 08:29:43.406070   39664 out.go:179] * [functional-822966] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0110 08:29:43.408986   39664 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:29:43.409048   39664 notify.go:221] Checking for updates...
	I0110 08:29:43.414774   39664 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:29:43.417708   39664 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 08:29:43.420611   39664 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 08:29:43.423631   39664 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 08:29:43.426584   39664 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:29:43.430094   39664 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:29:43.430667   39664 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:29:43.466987   39664 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 08:29:43.467191   39664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:29:43.537982   39664 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 08:29:43.528045991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:29:43.538096   39664 docker.go:319] overlay module found
	I0110 08:29:43.541195   39664 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0110 08:29:43.544014   39664 start.go:309] selected driver: docker
	I0110 08:29:43.544036   39664 start.go:928] validating driver "docker" against &{Name:functional-822966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-822966 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:29:43.544138   39664 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:29:43.547699   39664 out.go:203] 
	W0110 08:29:43.550624   39664 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0110 08:29:43.553559   39664 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
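
status accepts a Go template via -f and a JSON form via -o json; the template fields used above are .Host, .Kubelet, .APIServer and .Kubeconfig. A sketch (note the test's own format string labels the Kubelet field "kublet", but the field name it reads is .Kubelet):
    out/minikube-linux-arm64 -p functional-822966 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-822966 status -o json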

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-822966 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-822966 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-j9wl8" [31d2a91e-75f5-40cf-bd30-d5c6e9d9bf85] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-j9wl8" [31d2a91e-75f5-40cf-bd30-d5c6e9d9bf85] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.00331454s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31022
functional_test.go:1685: http://192.168.49.2:31022: success! body:
Request served by hello-node-connect-5d95464fd4-j9wl8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31022
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.60s)
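ServiceCmdConnect above creates an echo-server deployment, exposes it as a NodePort service on port 8080, resolves its URL with `minikube service hello-node-connect --url`, and checks that the HTTP body names the serving pod. A rough sketch of that final probe, assuming the service URL is passed as an argument (this run's was http://192.168.49.2:31022):

// connect_probe.go - hedged sketch of the HTTP check ServiceCmdConnect performs:
// GET the NodePort URL and confirm the echo body mentions the deployment name.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: connect_probe <service-url>") // e.g. http://192.168.49.2:31022
	}
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(os.Args[1])
	if err != nil {
		log.Fatalf("GET %s: %v", os.Args[1], err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read body: %v", err)
	}
	// The echo-server replies with "Request served by <pod-name>", so the
	// deployment name should appear in the body.
	if !strings.Contains(string(body), "hello-node-connect") {
		log.Fatalf("unexpected body:\n%s", body)
	}
	fmt.Println("success! body:")
	fmt.Println(string(body))
}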

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (22.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [73b34ae4-6004-4a87-81ab-3866fb13596b] Running
E0110 08:29:14.784604    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00329951s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-822966 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-822966 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-822966 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-822966 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8f882ec9-64dd-4ca2-9d22-c7d60cfa457c] Pending
helpers_test.go:353: "sp-pod" [8f882ec9-64dd-4ca2-9d22-c7d60cfa457c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [8f882ec9-64dd-4ca2-9d22-c7d60cfa457c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003620671s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-822966 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-822966 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-822966 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4c4c3fd0-bb75-460c-afdc-8c2a291ebff1] Pending
helpers_test.go:353: "sp-pod" [4c4c3fd0-bb75-460c-afdc-8c2a291ebff1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003838699s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-822966 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.97s)
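The PersistentVolumeClaim test above applies a claim and a pod that mounts it, writes /tmp/mount/foo, deletes and recreates the pod, then lists the mount to prove the data survived. A rough sketch of the same sequence driven through kubectl, assuming the manifests referenced above (testdata/storage-provisioner/pvc.yaml and pod.yaml), the pod name sp-pod, and `kubectl wait` in place of the test's own readiness polling:

// pvc_persistence.go - hedged sketch of the persistence check: data written
// through the first sp-pod should still be visible from its replacement.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// run executes kubectl with the given args against the functional-822966 context.
func run(args ...string) string {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-822966"}, args...)...)
	cmd.Stderr = os.Stderr
	out, err := cmd.Output()
	if err != nil {
		log.Fatalf("kubectl %v: %v", args, err)
	}
	return string(out)
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")

	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the PVC (and its data) should outlive it.
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")

	fmt.Print(run("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
}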

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh -n functional-822966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cp functional-822966:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3972550129/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh -n functional-822966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh -n functional-822966 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)
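CpCmd above copies a local file into the node, copies it back out, and copies to a path whose parent directories do not exist yet, verifying each step with `ssh sudo cat`. A minimal round-trip sketch along the same lines, assuming the minikube binary is on PATH and reusing this run's profile name:

// cp_roundtrip.go - hedged sketch: copy a file into the node with `minikube cp`,
// read it back over `minikube ssh`, and compare it with the original contents.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "functional-822966" // from the report above
	src := "testdata/cp-test.txt"

	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatalf("read %s: %v", src, err)
	}

	if out, err := exec.Command("minikube", "-p", profile, "cp", src, "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("minikube cp: %v\n%s", err, out)
	}

	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("minikube ssh: %v", err)
	}

	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("round-trip mismatch:\n got: %q\nwant: %q", got, want)
	}
	fmt.Println("cp round-trip OK")
}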

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/4257/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo cat /etc/test/nested/copy/4257/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/4257.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo cat /etc/ssl/certs/4257.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/4257.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo cat /usr/share/ca-certificates/4257.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/42572.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo cat /etc/ssl/certs/42572.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/42572.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo cat /usr/share/ca-certificates/42572.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)
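CertSync above verifies that certificates synced into the node are present both under /etc/ssl/certs and /usr/share/ca-certificates, plus as hash-named links (51391683.0, 3ec20f2e.0). A small sketch that checks the same paths over `minikube ssh`; the 4257/42572 names are specific to that test run and would differ elsewhere:

// certsync_check.go - hedged sketch: confirm each expected cert path exists in
// the node by cat-ing it over `minikube ssh`, as the CertSync test does.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "functional-822966"
	// Paths mirror the ones checked above.
	paths := []string{
		"/etc/ssl/certs/4257.pem",
		"/usr/share/ca-certificates/4257.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/42572.pem",
		"/usr/share/ca-certificates/42572.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		out, err := exec.Command("minikube", "-p", profile, "ssh", "sudo cat "+p).CombinedOutput()
		if err != nil {
			log.Fatalf("missing %s: %v\n%s", p, err, out)
		}
		fmt.Printf("ok: %s (%d bytes)\n", p, len(out))
	}
}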

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-822966 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
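NodeLabels above reads the first node's labels with the go-template `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`. The same template syntax can be exercised locally with the text/template package; the label values below are invented purely for illustration:

// labels_template.go - hedged sketch showing how the go-template used by the
// NodeLabels test walks a label map and prints only the keys.
package main

import (
	"log"
	"os"
	"text/template"
)

func main() {
	// Shape mimics `kubectl get nodes -o json`: .items[0].metadata.labels.
	data := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/hostname": "functional-822966", // sample values only
				"kubernetes.io/os":       "linux",
			}}},
		},
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}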

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 ssh "sudo systemctl is-active docker": exit status 1 (400.269937ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 ssh "sudo systemctl is-active crio": exit status 1 (313.069519ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
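NonActiveRuntimeDisabled above asserts that the runtimes other than containerd report "inactive". `systemctl is-active` exits with status 3 for an inactive unit, which minikube ssh surfaces as a non-zero exit (status 1 in this run) while stdout still reads "inactive". A sketch of interpreting that result, assuming minikube is on PATH:

// runtime_inactive.go - hedged sketch: run `systemctl is-active <unit>` inside
// the node and treat a non-zero exit plus "inactive" output as the expected case.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-822966"
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err == nil {
			log.Fatalf("%s is unexpectedly active (%s)", unit, state)
		}
		// systemctl exits 3 for an inactive unit; the ssh wrapper turns that
		// into a non-zero exit while still printing "inactive".
		if strings.Contains(state, "inactive") {
			fmt.Printf("%s: inactive, as expected\n", unit)
			continue
		}
		log.Fatalf("%s: unexpected output %q (err=%v)", unit, state, err)
	}
}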

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-822966 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-822966 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-822966 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-822966 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 37007: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-822966 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-822966 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [ece2f446-8a4a-4e2e-8749-3023d3ecb4f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [ece2f446-8a4a-4e2e-8749-3023d3ecb4f5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003669613s
I0110 08:29:22.623478    4257 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-822966 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.189.100 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
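The tunnel subtests above start `minikube tunnel`, wait for the nginx-svc LoadBalancer to receive an ingress IP (queried with the jsonpath shown above), and then fetch that IP directly. A sketch of the poll-then-probe step, assuming the kubectl context and service name from this run and that a tunnel is already running:

// tunnel_probe.go - hedged sketch: poll the LoadBalancer ingress IP of nginx-svc
// (populated by `minikube tunnel`) and issue a plain HTTP GET against it.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func ingressIP() string {
	out, err := exec.Command("kubectl", "--context", "functional-822966",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		return "" // not assigned yet (or query failed); keep polling
	}
	return strings.TrimSpace(string(out))
}

func main() {
	var ip string
	for i := 0; i < 30; i++ { // wait up to ~1 minute for the tunnel to assign an IP
		if ip = ingressIP(); ip != "" {
			break
		}
		time.Sleep(2 * time.Second)
	}
	if ip == "" {
		log.Fatal("nginx-svc never received an ingress IP; is `minikube tunnel` running?")
	}
	resp, err := http.Get("http://" + ip)
	if err != nil {
		log.Fatalf("GET http://%s: %v", ip, err)
	}
	defer resp.Body.Close()
	fmt.Printf("tunnel at http://%s is working! status: %s\n", ip, resp.Status)
}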

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-822966 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-822966 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-822966 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-ngr9w" [f28d790a-51dc-49e8-b7b0-1dc5f9ad6573] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-ngr9w" [f28d790a-51dc-49e8-b7b0-1dc5f9ad6573] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003430772s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "407.681146ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "58.510825ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "378.55228ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "56.709826ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
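The ProfileCmd subtests above compare `minikube profile list` in its table, `-o json`, and `-o json --light` forms; the light form returns in ~57ms versus ~380ms, presumably because it skips probing cluster status. A minimal sketch of consuming the JSON form that assumes only that the top-level value is a JSON object (the exact schema is not shown in this report):

// profiles_json.go - hedged sketch: decode `minikube profile list -o json --light`
// into a generic map so its top-level groups can be inspected without a schema.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		log.Fatalf("minikube profile list: %v", err)
	}
	var profiles map[string]json.RawMessage
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for key, raw := range profiles {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}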

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdany-port3798253796/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768033779005779002" to /tmp/TestFunctionalparallelMountCmdany-port3798253796/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768033779005779002" to /tmp/TestFunctionalparallelMountCmdany-port3798253796/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768033779005779002" to /tmp/TestFunctionalparallelMountCmdany-port3798253796/001/test-1768033779005779002
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.468356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0110 08:29:39.375131    4257 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 10 08:29 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 10 08:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 10 08:29 test-1768033779005779002
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh cat /mount-9p/test-1768033779005779002
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-822966 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [9b1141f8-78ac-497f-a9cb-6089a9003521] Pending
helpers_test.go:353: "busybox-mount" [9b1141f8-78ac-497f-a9cb-6089a9003521] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [9b1141f8-78ac-497f-a9cb-6089a9003521] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [9b1141f8-78ac-497f-a9cb-6089a9003521] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00393887s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-822966 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdany-port3798253796/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.53s)
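MountCmd/any-port above starts `minikube mount <host-dir>:/mount-9p` as a background process and then confirms from inside the node that a 9p filesystem is mounted; note the first findmnt probe fails and is retried, since the mount takes a moment to appear. A sketch of the same start-poll-cleanup cycle, assuming this run's profile name:

// mount_poll.go - hedged sketch: run `minikube mount` in the background, poll
// `findmnt` over ssh until the 9p mount shows up, then tear everything down.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "functional-822966"
	hostDir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(hostDir)

	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatalf("start mount: %v", err)
	}
	defer mount.Process.Kill() // the mount lives only as long as this process

	for i := 0; i < 20; i++ {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p").CombinedOutput()
		if err == nil && strings.Contains(string(out), "9p") {
			fmt.Println("9p mount is visible in the node:")
			fmt.Print(string(out))
			return
		}
		time.Sleep(time.Second) // the run above also needed one retry
	}
	log.Fatal("mount never appeared at /mount-9p")
}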

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 service list -o json
functional_test.go:1509: Took "599.216355ms" to run "out/minikube-linux-arm64 -p functional-822966 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:32600
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:32600
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdspecific-port2990112982/001:/mount-9p --alsologtostderr -v=1 --port 36853]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.843499ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0110 08:29:47.947137    4257 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdspecific-port2990112982/001:/mount-9p --alsologtostderr -v=1 --port 36853] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 ssh "sudo umount -f /mount-9p": exit status 1 (352.150694ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-822966 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdspecific-port2990112982/001:/mount-9p --alsologtostderr -v=1 --port 36853] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.25s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdVerifyCleanup687452589/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdVerifyCleanup687452589/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdVerifyCleanup687452589/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T" /mount1: exit status 1 (1.064742798s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T" /mount1
2026/01/10 08:29:51 [DEBUG] GET http://127.0.0.1:45395/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-822966 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdVerifyCleanup687452589/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdVerifyCleanup687452589/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-822966 /tmp/TestFunctionalparallelMountCmdVerifyCleanup687452589/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.67s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 version -o=json --components: (1.302006202s)
--- PASS: TestFunctional/parallel/Version/components (1.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-822966 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-822966
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-822966 image ls --format short --alsologtostderr:
I0110 08:29:59.903878   42757 out.go:360] Setting OutFile to fd 1 ...
I0110 08:29:59.904465   42757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:29:59.904498   42757 out.go:374] Setting ErrFile to fd 2...
I0110 08:29:59.904519   42757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:29:59.904811   42757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
I0110 08:29:59.905508   42757 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:29:59.905696   42757 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:29:59.906242   42757 cli_runner.go:164] Run: docker container inspect functional-822966 --format={{.State.Status}}
I0110 08:29:59.929736   42757 ssh_runner.go:195] Run: systemctl --version
I0110 08:29:59.929804   42757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-822966
I0110 08:29:59.953419   42757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/functional-822966/id_rsa Username:docker}
I0110 08:30:00.336980   42757 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-822966 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ sha256:ddc842 │ 15.4MB │
│ docker.io/library/minikube-local-cache-test       │ functional-822966                     │ sha256:5750cb │ 990B   │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ sha256:88898f │ 20.7MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ sha256:de369f │ 22.4MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-822966                     │ sha256:ce2d2c │ 2.17MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ sha256:611c66 │ 25.7MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/pause                             │ latest                                │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ sha256:c3fcf2 │ 24.7MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-822966 image ls --format table --alsologtostderr:
I0110 08:30:00.611825   42837 out.go:360] Setting OutFile to fd 1 ...
I0110 08:30:00.612067   42837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:00.612076   42837 out.go:374] Setting ErrFile to fd 2...
I0110 08:30:00.612082   42837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:00.612392   42837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
I0110 08:30:00.613214   42837 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:30:00.613371   42837 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:30:00.614011   42837 cli_runner.go:164] Run: docker container inspect functional-822966 --format={{.State.Status}}
I0110 08:30:00.649112   42837 ssh_runner.go:195] Run: systemctl --version
I0110 08:30:00.649202   42837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-822966
I0110 08:30:00.681350   42837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/functional-822966/id_rsa Username:docker}
I0110 08:30:00.817499   42837 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-822966 image ls --format json --alsologtostderr:
[{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966"],"size":"2173567"}
,{"id":"sha256:611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"25743422"},{"id":"sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"20672243"},{"id":"sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"15405198"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],
"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:5750cb183c2ebe63cd47d15adcac4f3e94d6434b8c6beee66957612556a07faa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-822966"],"size":"990"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"},{"id":"sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"s
ize":"22432091"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918
dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"24692295"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-822966 image ls --format json --alsologtostderr:
I0110 08:30:00.601683   42832 out.go:360] Setting OutFile to fd 1 ...
I0110 08:30:00.601948   42832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:00.601979   42832 out.go:374] Setting ErrFile to fd 2...
I0110 08:30:00.602001   42832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:00.602426   42832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
I0110 08:30:00.603342   42832 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:30:00.604018   42832 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:30:00.604721   42832 cli_runner.go:164] Run: docker container inspect functional-822966 --format={{.State.Status}}
I0110 08:30:00.628940   42832 ssh_runner.go:195] Run: systemctl --version
I0110 08:30:00.629017   42832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-822966
I0110 08:30:00.671514   42832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/functional-822966/id_rsa Username:docker}
I0110 08:30:00.798844   42832 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)
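The image ls subtests render the same inventory in short, table, JSON, and YAML forms; the stderr above shows minikube gathering it by running `sudo crictl images --output json` inside the node over ssh. A sketch of decoding the `image ls --format json` output, using only the fields visible in the JSON above (id, repoDigests, repoTags, size, with size reported as a string of bytes):

// imagels_decode.go - hedged sketch: parse `minikube image ls --format json`
// into structs matching the fields visible in this report's JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count as a string, per the output above
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-822966",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("minikube image ls: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		tag := "<untagged>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-70s %s bytes\n", tag, img.Size)
	}
}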

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-822966 image ls --format yaml --alsologtostderr:
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "24692295"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:5750cb183c2ebe63cd47d15adcac4f3e94d6434b8c6beee66957612556a07faa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-822966
size: "990"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966
size: "2173567"
- id: sha256:611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "25743422"
- id: sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "22432091"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "20672243"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "15405198"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-822966 image ls --format yaml --alsologtostderr:
I0110 08:29:59.912093   42758 out.go:360] Setting OutFile to fd 1 ...
I0110 08:29:59.912250   42758 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:29:59.912258   42758 out.go:374] Setting ErrFile to fd 2...
I0110 08:29:59.912271   42758 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:29:59.912589   42758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
I0110 08:29:59.913225   42758 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:29:59.913358   42758 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:29:59.913933   42758 cli_runner.go:164] Run: docker container inspect functional-822966 --format={{.State.Status}}
I0110 08:29:59.947882   42758 ssh_runner.go:195] Run: systemctl --version
I0110 08:29:59.947962   42758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-822966
I0110 08:29:59.974119   42758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/functional-822966/id_rsa Username:docker}
I0110 08:30:00.367213   42758 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.67s)
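For readers who want to post-process the `image ls --format yaml` listing shown above, here is a minimal Go sketch for decoding it. It is not part of the test suite; the struct fields are inferred from the YAML printed above, the input filename is hypothetical, and it assumes the gopkg.in/yaml.v3 package is available.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// listedImage mirrors one entry of the YAML listing above (fields inferred from that output).
type listedImage struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Hypothetical path: the saved output of `minikube image ls --format yaml`.
	data, err := os.ReadFile("image-ls.yaml")
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := yaml.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size, "bytes")
	}
}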

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-822966 ssh pgrep buildkitd: exit status 1 (383.679696ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image build -t localhost/my-image:functional-822966 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 image build -t localhost/my-image:functional-822966 testdata/build --alsologtostderr: (3.519955519s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-822966 image build -t localhost/my-image:functional-822966 testdata/build --alsologtostderr:
I0110 08:30:01.302347   42969 out.go:360] Setting OutFile to fd 1 ...
I0110 08:30:01.302585   42969 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:01.302603   42969 out.go:374] Setting ErrFile to fd 2...
I0110 08:30:01.302610   42969 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:30:01.302994   42969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
I0110 08:30:01.303866   42969 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:30:01.306400   42969 config.go:182] Loaded profile config "functional-822966": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0110 08:30:01.307189   42969 cli_runner.go:164] Run: docker container inspect functional-822966 --format={{.State.Status}}
I0110 08:30:01.325666   42969 ssh_runner.go:195] Run: systemctl --version
I0110 08:30:01.325735   42969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-822966
I0110 08:30:01.360946   42969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/functional-822966/id_rsa Username:docker}
I0110 08:30:01.479633   42969 build_images.go:162] Building image from path: /tmp/build.2143648486.tar
I0110 08:30:01.479712   42969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0110 08:30:01.489702   42969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2143648486.tar
I0110 08:30:01.494820   42969 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2143648486.tar: stat -c "%s %y" /var/lib/minikube/build/build.2143648486.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2143648486.tar': No such file or directory
I0110 08:30:01.494962   42969 ssh_runner.go:362] scp /tmp/build.2143648486.tar --> /var/lib/minikube/build/build.2143648486.tar (3072 bytes)
I0110 08:30:01.519568   42969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2143648486
I0110 08:30:01.530053   42969 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2143648486 -xf /var/lib/minikube/build/build.2143648486.tar
I0110 08:30:01.540741   42969 containerd.go:402] Building image: /var/lib/minikube/build/build.2143648486
I0110 08:30:01.540841   42969 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2143648486 --local dockerfile=/var/lib/minikube/build/build.2143648486 --output type=image,name=localhost/my-image:functional-822966
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:27b1368fce7c08ab1a869cf4beecf9ce126930a6774eda250a4cb33877f9eace 0.0s done
#8 exporting config sha256:c1d40b3ead57afef9191bf5f946fdfd085ed21f10985b7f4fdb3cd7433350399 0.0s done
#8 naming to localhost/my-image:functional-822966 done
#8 DONE 0.2s
I0110 08:30:04.720873   42969 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2143648486 --local dockerfile=/var/lib/minikube/build/build.2143648486 --output type=image,name=localhost/my-image:functional-822966: (3.180001274s)
I0110 08:30:04.720961   42969 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2143648486
I0110 08:30:04.730476   42969 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2143648486.tar
I0110 08:30:04.739630   42969 build_images.go:218] Built localhost/my-image:functional-822966 from /tmp/build.2143648486.tar
I0110 08:30:04.739659   42969 build_images.go:134] succeeded building to: functional-822966
I0110 08:30:04.739664   42969 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)
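The build log above shows minikube staging the build context under /var/lib/minikube/build inside the node and then invoking BuildKit's buildctl there. The following Go sketch replays that in-node invocation by shelling out through `minikube ssh`; it is an illustration only (paths, profile name, and image name are copied from the log), not the suite's implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Staging directory taken from the log above.
	buildDir := "/var/lib/minikube/build/build.2143648486"
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-822966", "ssh", "--",
		"sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+buildDir,
		"--local", "dockerfile="+buildDir,
		"--output", "type=image,name=localhost/my-image:functional-822966")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}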

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 --alsologtostderr: (1.04846054s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-822966 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 --alsologtostderr: (1.029684323s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-822966 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)
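ImageSaveToFile, ImageRemove, and ImageLoadFromFile above exercise a tarball round trip: save the tagged image to a tar on the host, remove it from the node, then load it back from the file. A minimal Go sketch of driving those same three CLI calls (commands and tar path copied from the log; not the suite's own helper):

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}

func main() {
	img := "ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966"
	tar := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar"
	run("-p", "functional-822966", "image", "save", img, tar) // save to file
	run("-p", "functional-822966", "image", "rm", img)        // remove from the node
	run("-p", "functional-822966", "image", "load", tar)      // load back from the tar
}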

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-822966
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-822966
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-822966
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (141.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0110 08:30:36.705091    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m20.378449704s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (141.29s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 kubectl -- rollout status deployment/busybox: (3.910522346s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-7tnbr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-9wqbg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-zvjnn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-7tnbr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-9wqbg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-zvjnn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-7tnbr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-9wqbg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-zvjnn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-7tnbr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-7tnbr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-9wqbg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-9wqbg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-zvjnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 kubectl -- exec busybox-769dd8b7dd-zvjnn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)
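The PingHostFromPods block above resolves host.minikube.internal from inside each busybox pod and then pings the resolved gateway address (192.168.49.1 in this run). A small Go sketch of that two-step check for a single pod, assuming `kubectl` on the PATH uses the same kubeconfig as the run above; the pod name is one of those logged, not a stable identifier:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-769dd8b7dd-7tnbr" // one of the pods from the log above
	resolve := exec.Command("kubectl", "--context", "ha-741131", "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out, err := resolve.Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // 192.168.49.1 in the run above
	fmt.Println("host.minikube.internal ->", hostIP)

	ping := exec.Command("kubectl", "--context", "ha-741131", "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	if err := ping.Run(); err != nil {
		panic(err)
	}
}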

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (29.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 node add --alsologtostderr -v 5
E0110 08:32:52.859707    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 node add --alsologtostderr -v 5: (28.246356068s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5: (1.061081506s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-741131 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.076626799s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 status --output json --alsologtostderr -v 5: (1.111833726s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp testdata/cp-test.txt ha-741131:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1823260613/001/cp-test_ha-741131.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131:/home/docker/cp-test.txt ha-741131-m02:/home/docker/cp-test_ha-741131_ha-741131-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m02 "sudo cat /home/docker/cp-test_ha-741131_ha-741131-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131:/home/docker/cp-test.txt ha-741131-m03:/home/docker/cp-test_ha-741131_ha-741131-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m03 "sudo cat /home/docker/cp-test_ha-741131_ha-741131-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131:/home/docker/cp-test.txt ha-741131-m04:/home/docker/cp-test_ha-741131_ha-741131-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m04 "sudo cat /home/docker/cp-test_ha-741131_ha-741131-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp testdata/cp-test.txt ha-741131-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1823260613/001/cp-test_ha-741131-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m02:/home/docker/cp-test.txt ha-741131:/home/docker/cp-test_ha-741131-m02_ha-741131.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131 "sudo cat /home/docker/cp-test_ha-741131-m02_ha-741131.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m02:/home/docker/cp-test.txt ha-741131-m03:/home/docker/cp-test_ha-741131-m02_ha-741131-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m03 "sudo cat /home/docker/cp-test_ha-741131-m02_ha-741131-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m02:/home/docker/cp-test.txt ha-741131-m04:/home/docker/cp-test_ha-741131-m02_ha-741131-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m04 "sudo cat /home/docker/cp-test_ha-741131-m02_ha-741131-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp testdata/cp-test.txt ha-741131-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1823260613/001/cp-test_ha-741131-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m03:/home/docker/cp-test.txt ha-741131:/home/docker/cp-test_ha-741131-m03_ha-741131.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m03 "sudo cat /home/docker/cp-test.txt"
E0110 08:33:20.545975    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131 "sudo cat /home/docker/cp-test_ha-741131-m03_ha-741131.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m03:/home/docker/cp-test.txt ha-741131-m02:/home/docker/cp-test_ha-741131-m03_ha-741131-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m02 "sudo cat /home/docker/cp-test_ha-741131-m03_ha-741131-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m03:/home/docker/cp-test.txt ha-741131-m04:/home/docker/cp-test_ha-741131-m03_ha-741131-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m04 "sudo cat /home/docker/cp-test_ha-741131-m03_ha-741131-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp testdata/cp-test.txt ha-741131-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1823260613/001/cp-test_ha-741131-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m04:/home/docker/cp-test.txt ha-741131:/home/docker/cp-test_ha-741131-m04_ha-741131.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131 "sudo cat /home/docker/cp-test_ha-741131-m04_ha-741131.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m04:/home/docker/cp-test.txt ha-741131-m02:/home/docker/cp-test_ha-741131-m04_ha-741131-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m02 "sudo cat /home/docker/cp-test_ha-741131-m04_ha-741131-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 cp ha-741131-m04:/home/docker/cp-test.txt ha-741131-m03:/home/docker/cp-test_ha-741131-m04_ha-741131-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 ssh -n ha-741131-m03 "sudo cat /home/docker/cp-test_ha-741131-m04_ha-741131-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.03s)
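The CopyFile block above repeats the same pair of operations for every ordered pair of nodes: `minikube cp` the test file onto a source node, fan it out to each other node, and verify each copy with `ssh -n <node> sudo cat`. A condensed Go sketch of that matrix, assuming the same profile and paths as the log; error handling is minimal:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}

func main() {
	nodes := []string{"ha-741131", "ha-741131-m02", "ha-741131-m03", "ha-741131-m04"}
	for _, src := range nodes {
		// Copy the fixture onto the source node and verify it.
		run("-p", "ha-741131", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("-p", "ha-741131", "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// Node-to-node copy, then verify on the destination.
			remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			run("-p", "ha-741131", "cp", src+":/home/docker/cp-test.txt", remote)
			run("-p", "ha-741131", "ssh", "-n", dst,
				fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
}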

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 node stop m02 --alsologtostderr -v 5: (12.137422018s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5: exit status 7 (784.849537ms)

                                                
                                                
-- stdout --
	ha-741131
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-741131-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-741131-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-741131-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:33:40.201293   59413 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:33:40.201452   59413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:33:40.201470   59413 out.go:374] Setting ErrFile to fd 2...
	I0110 08:33:40.201477   59413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:33:40.202012   59413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:33:40.202459   59413 out.go:368] Setting JSON to false
	I0110 08:33:40.202521   59413 mustload.go:66] Loading cluster: ha-741131
	I0110 08:33:40.203514   59413 config.go:182] Loaded profile config "ha-741131": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:33:40.203574   59413 status.go:174] checking status of ha-741131 ...
	I0110 08:33:40.204430   59413 cli_runner.go:164] Run: docker container inspect ha-741131 --format={{.State.Status}}
	I0110 08:33:40.206603   59413 notify.go:221] Checking for updates...
	I0110 08:33:40.226090   59413 status.go:371] ha-741131 host status = "Running" (err=<nil>)
	I0110 08:33:40.226117   59413 host.go:66] Checking if "ha-741131" exists ...
	I0110 08:33:40.226423   59413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-741131
	I0110 08:33:40.257746   59413 host.go:66] Checking if "ha-741131" exists ...
	I0110 08:33:40.258156   59413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:33:40.258216   59413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-741131
	I0110 08:33:40.277603   59413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/ha-741131/id_rsa Username:docker}
	I0110 08:33:40.385635   59413 ssh_runner.go:195] Run: systemctl --version
	I0110 08:33:40.392409   59413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:33:40.406652   59413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:33:40.475679   59413 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2026-01-10 08:33:40.463889182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:33:40.476226   59413 kubeconfig.go:125] found "ha-741131" server: "https://192.168.49.254:8443"
	I0110 08:33:40.476269   59413 api_server.go:166] Checking apiserver status ...
	I0110 08:33:40.476315   59413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:33:40.490455   59413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	I0110 08:33:40.499864   59413 api_server.go:192] apiserver freezer: "9:freezer:/docker/53d4a77e859eb7f41183f74dce42992a4de542da49a7f92402fd2092ef1643b2/kubepods/burstable/pod6de1e58c6d4cbf04f9d0a97c7cc2eeb3/99ef9a04ee7373c00474a2ff827e7897bd436849962f913b34bd9f07a19f8e1a"
	I0110 08:33:40.499944   59413 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/53d4a77e859eb7f41183f74dce42992a4de542da49a7f92402fd2092ef1643b2/kubepods/burstable/pod6de1e58c6d4cbf04f9d0a97c7cc2eeb3/99ef9a04ee7373c00474a2ff827e7897bd436849962f913b34bd9f07a19f8e1a/freezer.state
	I0110 08:33:40.508135   59413 api_server.go:214] freezer state: "THAWED"
	I0110 08:33:40.508162   59413 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 08:33:40.518181   59413 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 08:33:40.518215   59413 status.go:463] ha-741131 apiserver status = Running (err=<nil>)
	I0110 08:33:40.518234   59413 status.go:176] ha-741131 status: &{Name:ha-741131 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:33:40.518251   59413 status.go:174] checking status of ha-741131-m02 ...
	I0110 08:33:40.518568   59413 cli_runner.go:164] Run: docker container inspect ha-741131-m02 --format={{.State.Status}}
	I0110 08:33:40.535690   59413 status.go:371] ha-741131-m02 host status = "Stopped" (err=<nil>)
	I0110 08:33:40.535718   59413 status.go:384] host is not running, skipping remaining checks
	I0110 08:33:40.535726   59413 status.go:176] ha-741131-m02 status: &{Name:ha-741131-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:33:40.535745   59413 status.go:174] checking status of ha-741131-m03 ...
	I0110 08:33:40.536117   59413 cli_runner.go:164] Run: docker container inspect ha-741131-m03 --format={{.State.Status}}
	I0110 08:33:40.553655   59413 status.go:371] ha-741131-m03 host status = "Running" (err=<nil>)
	I0110 08:33:40.553677   59413 host.go:66] Checking if "ha-741131-m03" exists ...
	I0110 08:33:40.554254   59413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-741131-m03
	I0110 08:33:40.571815   59413 host.go:66] Checking if "ha-741131-m03" exists ...
	I0110 08:33:40.572148   59413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:33:40.572206   59413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-741131-m03
	I0110 08:33:40.589762   59413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/ha-741131-m03/id_rsa Username:docker}
	I0110 08:33:40.693121   59413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:33:40.710447   59413 kubeconfig.go:125] found "ha-741131" server: "https://192.168.49.254:8443"
	I0110 08:33:40.710481   59413 api_server.go:166] Checking apiserver status ...
	I0110 08:33:40.710544   59413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:33:40.724442   59413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1348/cgroup
	I0110 08:33:40.732658   59413 api_server.go:192] apiserver freezer: "9:freezer:/docker/0c27272ef17ef1f256384453953e44336535ff999ed564a9a91499cda32e9a7a/kubepods/burstable/pod635f20e8ef6d8178a1d415fcef3d2eca/701c9bcaf37db8839500283b6c753b612e5348a50a60f59ce9c0100ffd557449"
	I0110 08:33:40.732747   59413 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0c27272ef17ef1f256384453953e44336535ff999ed564a9a91499cda32e9a7a/kubepods/burstable/pod635f20e8ef6d8178a1d415fcef3d2eca/701c9bcaf37db8839500283b6c753b612e5348a50a60f59ce9c0100ffd557449/freezer.state
	I0110 08:33:40.740400   59413 api_server.go:214] freezer state: "THAWED"
	I0110 08:33:40.740430   59413 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 08:33:40.749207   59413 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 08:33:40.749240   59413 status.go:463] ha-741131-m03 apiserver status = Running (err=<nil>)
	I0110 08:33:40.749252   59413 status.go:176] ha-741131-m03 status: &{Name:ha-741131-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:33:40.749305   59413 status.go:174] checking status of ha-741131-m04 ...
	I0110 08:33:40.749640   59413 cli_runner.go:164] Run: docker container inspect ha-741131-m04 --format={{.State.Status}}
	I0110 08:33:40.767019   59413 status.go:371] ha-741131-m04 host status = "Running" (err=<nil>)
	I0110 08:33:40.767043   59413 host.go:66] Checking if "ha-741131-m04" exists ...
	I0110 08:33:40.767435   59413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-741131-m04
	I0110 08:33:40.784548   59413 host.go:66] Checking if "ha-741131-m04" exists ...
	I0110 08:33:40.784875   59413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:33:40.784929   59413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-741131-m04
	I0110 08:33:40.802696   59413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/ha-741131-m04/id_rsa Username:docker}
	I0110 08:33:40.916667   59413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:33:40.929494   59413 status.go:176] ha-741131-m04 status: &{Name:ha-741131-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
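The status output above ends each control-plane check by probing the apiserver's /healthz endpoint on the HA virtual IP (https://192.168.49.254:8443/healthz returned 200). A self-contained Go sketch of that final probe; skipping TLS verification is an assumption made only to keep the sketch short, not how the CLI itself is configured:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // the run above logged 200 and "ok"
}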

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (12.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 node start m02 --alsologtostderr -v 5: (11.451339996s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5: (1.111568159s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (12.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.088381002s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 stop --alsologtostderr -v 5
E0110 08:34:14.206236    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:14.211618    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:14.221988    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:14.242470    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:14.282867    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:14.363190    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:14.523547    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:14.844089    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:15.484566    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:16.765589    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:19.325880    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:24.446857    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 stop --alsologtostderr -v 5: (37.67717224s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 start --wait true --alsologtostderr -v 5
E0110 08:34:34.687125    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:34:55.167473    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 start --wait true --alsologtostderr -v 5: (59.136341319s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 node delete m03 --alsologtostderr -v 5
E0110 08:35:36.128411    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 node delete m03 --alsologtostderr -v 5: (10.077921075s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.06s)
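The `kubectl get nodes -o go-template` call above extracts only the Ready condition of each node. To see what that template does in isolation, here is a Go sketch that evaluates the same template against a tiny hand-written node list; the sample data is illustrative, not taken from the cluster:

package main

import (
	"os"
	"text/template"
)

// The exact template string passed to kubectl in the log above.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	nodes := map[string]interface{}{
		"items": []map[string]interface{}{
			{"status": map[string]interface{}{"conditions": []map[string]interface{}{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]interface{}{"conditions": []map[string]interface{}{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints one " True" line per node whose Ready condition is True.
}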

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 stop --alsologtostderr -v 5: (36.082150038s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5: exit status 7 (104.535916ms)

                                                
                                                
-- stdout --
	ha-741131
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-741131-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-741131-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:36:20.506541   74132 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:36:20.506666   74132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:36:20.506676   74132 out.go:374] Setting ErrFile to fd 2...
	I0110 08:36:20.506681   74132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:36:20.506932   74132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:36:20.507153   74132 out.go:368] Setting JSON to false
	I0110 08:36:20.507195   74132 mustload.go:66] Loading cluster: ha-741131
	I0110 08:36:20.507259   74132 notify.go:221] Checking for updates...
	I0110 08:36:20.508216   74132 config.go:182] Loaded profile config "ha-741131": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:36:20.508247   74132 status.go:174] checking status of ha-741131 ...
	I0110 08:36:20.508816   74132 cli_runner.go:164] Run: docker container inspect ha-741131 --format={{.State.Status}}
	I0110 08:36:20.525482   74132 status.go:371] ha-741131 host status = "Stopped" (err=<nil>)
	I0110 08:36:20.525507   74132 status.go:384] host is not running, skipping remaining checks
	I0110 08:36:20.525514   74132 status.go:176] ha-741131 status: &{Name:ha-741131 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:36:20.525536   74132 status.go:174] checking status of ha-741131-m02 ...
	I0110 08:36:20.525900   74132 cli_runner.go:164] Run: docker container inspect ha-741131-m02 --format={{.State.Status}}
	I0110 08:36:20.548955   74132 status.go:371] ha-741131-m02 host status = "Stopped" (err=<nil>)
	I0110 08:36:20.548980   74132 status.go:384] host is not running, skipping remaining checks
	I0110 08:36:20.548987   74132 status.go:176] ha-741131-m02 status: &{Name:ha-741131-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:36:20.549006   74132 status.go:174] checking status of ha-741131-m04 ...
	I0110 08:36:20.549293   74132 cli_runner.go:164] Run: docker container inspect ha-741131-m04 --format={{.State.Status}}
	I0110 08:36:20.565921   74132 status.go:371] ha-741131-m04 host status = "Stopped" (err=<nil>)
	I0110 08:36:20.565944   74132 status.go:384] host is not running, skipping remaining checks
	I0110 08:36:20.565950   74132 status.go:176] ha-741131-m04 status: &{Name:ha-741131-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.19s)
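The non-zero exit above is expected: with every host stopped, the status command reports each component as Stopped and, as in this run, exits with status 7 instead of 0. Below is a minimal Go sketch (not part of the test suite) of how a caller could run the same command and branch on that exit code; the binary path and the ha-741131 profile name are taken from this run, and treating exit code 7 as "stopped" follows what the log above shows.

// status_check.go: run "minikube status" for the ha-741131 profile and
// branch on its exit code, mirroring the check the test performs above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-741131", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all components running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Matches the exit status recorded in the run above when all hosts are stopped.
		fmt.Println("cluster reported as stopped (exit status 7)")
	default:
		fmt.Printf("status failed: %v\n", err)
	}
}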

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (65.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0110 08:36:58.050307    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m4.551519232s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (65.55s)
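The last command above verifies readiness with a go-template that walks each node's status.conditions and prints the status of the Ready condition. The sketch below performs the same check in plain Go by decoding kubectl's JSON output; it assumes kubectl is on PATH with its current context pointing at the restarted cluster, and it relies only on standard Kubernetes NodeList fields.

// ready_nodes.go: print the Ready condition status of every node, the
// rough equivalent of the go-template used by the test above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				// One line per node, like the template's " True" output.
				fmt.Printf("%s %s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}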

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (61.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 node add --control-plane --alsologtostderr -v 5
E0110 08:37:52.863629    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 node add --control-plane --alsologtostderr -v 5: (59.936546186s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-741131 status --alsologtostderr -v 5: (1.102555861s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (61.04s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.111509593s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                    
TestJSONOutput/start/Command (45.56s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-041841 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E0110 08:39:14.207059    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-041841 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (45.556353462s)
--- PASS: TestJSONOutput/start/Command (45.56s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-041841 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-041841 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-041841 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-041841 --output=json --user=testUser: (5.950895552s)
--- PASS: TestJSONOutput/stop/Command (5.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-594593 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-594593 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.443602ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b45e4042-efc7-4186-9c6c-f377a03cb9c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-594593] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c04291ff-0d73-4475-beb2-cccf4ddb0409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22427"}}
	{"specversion":"1.0","id":"258ec5d9-7fdb-4f3e-92be-d3eaedd4546f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fc628e44-9bce-41f3-bc95-e740a521fb61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig"}}
	{"specversion":"1.0","id":"a666d5b8-c0b4-458a-9271-d6467ff81340","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube"}}
	{"specversion":"1.0","id":"1508f080-e791-4af4-8ac5-6e8ed3317e9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"57790ef9-ca8c-4c3e-a997-1cc71ecafe95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9bf16bfd-f514-4638-9ccb-38aaeeddf948","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-594593" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-594593
--- PASS: TestErrorJSONOutput (0.25s)
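Each stdout line above is a CloudEvents-style envelope that minikube emits under --output=json: a type such as io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info or io.k8s.sigs.minikube.error, and a data object carrying the message and, for errors, the exit code. The following Go sketch consumes such a stream (pipe the JSON output into it); it relies only on the fields visible in the capture above and ignores everything else.

// json_events.go: decode minikube's --output=json event stream from stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// For error events the capture above also carries name and exitcode.
			fmt.Printf("  %s, exit code %s\n", ev.Data["name"], ev.Data["exitcode"])
		}
	}
}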

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.12s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-160007 --network=
E0110 08:39:41.891881    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-160007 --network=: (30.885440762s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-160007" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-160007
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-160007: (2.209651625s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.12s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (29.92s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-520782 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-520782 --network=bridge: (27.776409689s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-520782" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-520782
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-520782: (2.117874527s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.92s)

                                                
                                    
TestKicExistingNetwork (32.06s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0110 08:40:39.335866    4257 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 08:40:39.352579    4257 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 08:40:39.352664    4257 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0110 08:40:39.352681    4257 cli_runner.go:164] Run: docker network inspect existing-network
W0110 08:40:39.369056    4257 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0110 08:40:39.369088    4257 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0110 08:40:39.369103    4257 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0110 08:40:39.369206    4257 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 08:40:39.384821    4257 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e01acd8ff726 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:8b:1d:1f:6a:28} reservation:<nil>}
I0110 08:40:39.385146    4257 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40021ff180}
I0110 08:40:39.385786    4257 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0110 08:40:39.385867    4257 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0110 08:40:39.450590    4257 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-612470 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-612470 --network=existing-network: (29.794524286s)
helpers_test.go:176: Cleaning up "existing-network-612470" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-612470
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-612470: (2.119733091s)
I0110 08:41:11.381293    4257 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.06s)
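The sequence above pre-creates a bridge network with docker (choosing 192.168.58.0/24 after finding 192.168.49.0/24 already taken) and then starts a profile attached to it via --network=existing-network. A Go sketch reproducing the two steps with the flags from this log follows; the subnet is only valid if it is free on the host, and the profile name here is arbitrary.

// existing_network.go: create the bridge network, then start a minikube
// profile on it, mirroring the two commands recorded above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", name, args, err)
		os.Exit(1)
	}
}

func main() {
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	run("out/minikube-linux-arm64", "start", "-p", "existing-network-612470",
		"--network=existing-network")
}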

                                                
                                    
TestKicCustomSubnet (29.42s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-626345 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-626345 --subnet=192.168.60.0/24: (27.190729472s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-626345 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-626345" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-626345
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-626345: (2.204735928s)
--- PASS: TestKicCustomSubnet (29.42s)

                                                
                                    
TestKicStaticIP (31.33s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-226465 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-226465 --static-ip=192.168.200.200: (28.975388892s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-226465 ip
helpers_test.go:176: Cleaning up "static-ip-226465" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-226465
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-226465: (2.205257626s)
--- PASS: TestKicStaticIP (31.33s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (59.83s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-286081 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-286081 --driver=docker  --container-runtime=containerd: (25.850141176s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-288633 --driver=docker  --container-runtime=containerd
E0110 08:42:52.859759    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-288633 --driver=docker  --container-runtime=containerd: (28.0639784s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-286081
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-288633
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-288633" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-288633
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-288633: (2.053584737s)
helpers_test.go:176: Cleaning up "first-286081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-286081
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-286081: (2.386468844s)
--- PASS: TestMinikubeProfile (59.83s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-029530 --memory=3072 --mount-string /tmp/TestMountStartserial1647089796/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-029530 --memory=3072 --mount-string /tmp/TestMountStartserial1647089796/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.441375149s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.44s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-029530 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-031451 --memory=3072 --mount-string /tmp/TestMountStartserial1647089796/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-031451 --memory=3072 --mount-string /tmp/TestMountStartserial1647089796/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.148763699s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.15s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-031451 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-029530 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-029530 --alsologtostderr -v=5: (1.707623906s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-031451 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-031451
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-031451: (1.282016627s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.83s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-031451
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-031451: (6.831896372s)
--- PASS: TestMountStart/serial/RestartStopped (7.83s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-031451 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (73.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-466917 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E0110 08:44:14.207259    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:44:15.906267    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-466917 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m13.322790659s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.85s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-466917 -- rollout status deployment/busybox: (3.978031256s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-lh5xr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-vjr2j -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-lh5xr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-vjr2j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-lh5xr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-vjr2j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.12s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-lh5xr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-lh5xr -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-vjr2j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-466917 -- exec busybox-769dd8b7dd-vjr2j -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
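The pipeline above resolves host.minikube.internal inside a busybox pod, takes field 3 of line 5 of the nslookup output (awk 'NR==5' | cut -d' ' -f3), and pings the resulting address once. The sketch below does the same parsing in Go instead of awk/cut; the pod name is copied from this run, kubectl is assumed to point at the multinode cluster, and the line-5/field-3 layout is the same assumption about busybox's nslookup output that the test itself makes.

// host_ip_from_pod.go: resolve host.minikube.internal from inside a pod
// and ping it once, mirroring the commands recorded above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const pod = "busybox-769dd8b7dd-lh5xr" // pod name from this run
	out, err := exec.Command("kubectl", "exec", pod, "--",
		"nslookup", "host.minikube.internal").Output()
	if err != nil {
		panic(err)
	}
	lines := strings.Split(string(out), "\n")
	if len(lines) < 5 {
		panic("unexpected nslookup output")
	}
	fields := strings.Fields(lines[4]) // line 5, as in awk 'NR==5'
	if len(fields) < 3 {
		panic("unexpected nslookup output")
	}
	hostIP := fields[2] // field 3, as in cut -d' ' -f3
	fmt.Println("host.minikube.internal =", hostIP)

	if err := exec.Command("kubectl", "exec", pod, "--",
		"ping", "-c", "1", hostIP).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host reachable from pod")
}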

                                                
                                    
TestMultiNode/serial/AddNode (27.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-466917 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-466917 -v=5 --alsologtostderr: (27.093237891s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.82s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-466917 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp testdata/cp-test.txt multinode-466917:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile724202560/001/cp-test_multinode-466917.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917:/home/docker/cp-test.txt multinode-466917-m02:/home/docker/cp-test_multinode-466917_multinode-466917-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m02 "sudo cat /home/docker/cp-test_multinode-466917_multinode-466917-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917:/home/docker/cp-test.txt multinode-466917-m03:/home/docker/cp-test_multinode-466917_multinode-466917-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m03 "sudo cat /home/docker/cp-test_multinode-466917_multinode-466917-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp testdata/cp-test.txt multinode-466917-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile724202560/001/cp-test_multinode-466917-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917-m02:/home/docker/cp-test.txt multinode-466917:/home/docker/cp-test_multinode-466917-m02_multinode-466917.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917 "sudo cat /home/docker/cp-test_multinode-466917-m02_multinode-466917.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917-m02:/home/docker/cp-test.txt multinode-466917-m03:/home/docker/cp-test_multinode-466917-m02_multinode-466917-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m03 "sudo cat /home/docker/cp-test_multinode-466917-m02_multinode-466917-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp testdata/cp-test.txt multinode-466917-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile724202560/001/cp-test_multinode-466917-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917-m03:/home/docker/cp-test.txt multinode-466917:/home/docker/cp-test_multinode-466917-m03_multinode-466917.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917 "sudo cat /home/docker/cp-test_multinode-466917-m03_multinode-466917.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 cp multinode-466917-m03:/home/docker/cp-test.txt multinode-466917-m02:/home/docker/cp-test_multinode-466917-m03_multinode-466917-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 ssh -n multinode-466917-m02 "sudo cat /home/docker/cp-test_multinode-466917-m03_multinode-466917-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.47s)
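Each step above is a copy-and-verify round trip: minikube cp pushes a file to a node, and minikube ssh -n NODE "sudo cat ..." reads it back for comparison. Below is a Go sketch of one such round trip; the profile, node and paths are taken from this run.

// cp_roundtrip.go: copy a local file to one node and read it back over ssh.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "multinode-466917"
		node    = "multinode-466917-m02"
		remote  = "/home/docker/cp-test.txt"
	)
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	cp := exec.Command("out/minikube-linux-arm64", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":"+remote)
	if out, err := cp.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	back, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh",
		"-n", node, "sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(back), bytes.TrimSpace(local)) {
		panic("round-tripped file does not match the original")
	}
	fmt.Println("copy verified on", node)
}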

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-466917 node stop m03: (1.316784258s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-466917 status: exit status 7 (555.132274ms)

                                                
                                                
-- stdout --
	multinode-466917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-466917-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-466917-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-466917 status --alsologtostderr: exit status 7 (544.9216ms)

                                                
                                                
-- stdout --
	multinode-466917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-466917-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-466917-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:45:44.391559  127648 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:45:44.391668  127648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:45:44.391678  127648 out.go:374] Setting ErrFile to fd 2...
	I0110 08:45:44.391684  127648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:45:44.391946  127648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:45:44.392172  127648 out.go:368] Setting JSON to false
	I0110 08:45:44.392216  127648 mustload.go:66] Loading cluster: multinode-466917
	I0110 08:45:44.392287  127648 notify.go:221] Checking for updates...
	I0110 08:45:44.393519  127648 config.go:182] Loaded profile config "multinode-466917": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:45:44.393551  127648 status.go:174] checking status of multinode-466917 ...
	I0110 08:45:44.394264  127648 cli_runner.go:164] Run: docker container inspect multinode-466917 --format={{.State.Status}}
	I0110 08:45:44.414315  127648 status.go:371] multinode-466917 host status = "Running" (err=<nil>)
	I0110 08:45:44.414341  127648 host.go:66] Checking if "multinode-466917" exists ...
	I0110 08:45:44.414635  127648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-466917
	I0110 08:45:44.437738  127648 host.go:66] Checking if "multinode-466917" exists ...
	I0110 08:45:44.438141  127648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:45:44.438193  127648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-466917
	I0110 08:45:44.459213  127648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/multinode-466917/id_rsa Username:docker}
	I0110 08:45:44.564662  127648 ssh_runner.go:195] Run: systemctl --version
	I0110 08:45:44.571206  127648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:45:44.584005  127648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:45:44.649079  127648 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 08:45:44.639690809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 08:45:44.649843  127648 kubeconfig.go:125] found "multinode-466917" server: "https://192.168.67.2:8443"
	I0110 08:45:44.649890  127648 api_server.go:166] Checking apiserver status ...
	I0110 08:45:44.649957  127648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:45:44.662588  127648 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1328/cgroup
	I0110 08:45:44.671028  127648 api_server.go:192] apiserver freezer: "9:freezer:/docker/2303b5a8e9cde5034783f8f07b9e64037c3ac7bcafc450186f3209a94f4bede5/kubepods/burstable/poda1283a824e898aa66ca00f092f664afb/885913d8d6f00457dc03d0fea29c1393a09bf502f391b3b27cb375f6786adb11"
	I0110 08:45:44.671098  127648 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2303b5a8e9cde5034783f8f07b9e64037c3ac7bcafc450186f3209a94f4bede5/kubepods/burstable/poda1283a824e898aa66ca00f092f664afb/885913d8d6f00457dc03d0fea29c1393a09bf502f391b3b27cb375f6786adb11/freezer.state
	I0110 08:45:44.678671  127648 api_server.go:214] freezer state: "THAWED"
	I0110 08:45:44.678699  127648 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0110 08:45:44.687785  127648 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0110 08:45:44.687814  127648 status.go:463] multinode-466917 apiserver status = Running (err=<nil>)
	I0110 08:45:44.687825  127648 status.go:176] multinode-466917 status: &{Name:multinode-466917 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:45:44.687864  127648 status.go:174] checking status of multinode-466917-m02 ...
	I0110 08:45:44.688213  127648 cli_runner.go:164] Run: docker container inspect multinode-466917-m02 --format={{.State.Status}}
	I0110 08:45:44.708467  127648 status.go:371] multinode-466917-m02 host status = "Running" (err=<nil>)
	I0110 08:45:44.708492  127648 host.go:66] Checking if "multinode-466917-m02" exists ...
	I0110 08:45:44.708814  127648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-466917-m02
	I0110 08:45:44.725709  127648 host.go:66] Checking if "multinode-466917-m02" exists ...
	I0110 08:45:44.726035  127648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:45:44.726081  127648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-466917-m02
	I0110 08:45:44.743749  127648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22427-2439/.minikube/machines/multinode-466917-m02/id_rsa Username:docker}
	I0110 08:45:44.848483  127648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:45:44.861714  127648 status.go:176] multinode-466917-m02 status: &{Name:multinode-466917-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:45:44.861757  127648 status.go:174] checking status of multinode-466917-m03 ...
	I0110 08:45:44.862062  127648 cli_runner.go:164] Run: docker container inspect multinode-466917-m03 --format={{.State.Status}}
	I0110 08:45:44.880020  127648 status.go:371] multinode-466917-m03 host status = "Stopped" (err=<nil>)
	I0110 08:45:44.880042  127648 status.go:384] host is not running, skipping remaining checks
	I0110 08:45:44.880049  127648 status.go:176] multinode-466917-m03 status: &{Name:multinode-466917-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
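The stderr above shows how the status command validates a running apiserver: pgrep for the kube-apiserver process, confirm its cgroup freezer state is THAWED, then GET https://192.168.67.2:8443/healthz and expect 200 "ok". The sketch below covers only the final HTTPS probe against the endpoint from this log; it skips certificate verification instead of loading the cluster CA from the profile directory, so it is a sketch rather than what the real client does.

// healthz_probe.go: probe the apiserver healthz endpoint seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// InsecureSkipVerify only because this sketch does not load the
			// cluster CA; the endpoint and port come from the captured run.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}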

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-466917 node start m03 -v=5 --alsologtostderr: (7.079787069s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.90s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-466917
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-466917
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-466917: (25.130360541s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-466917 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-466917 --wait=true -v=5 --alsologtostderr: (55.319366061s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-466917
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.56s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-466917 node delete m03: (4.988802523s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)
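The go-template passed to kubectl above simply prints the value of each node's Ready condition, one per line, so the test can assert that every remaining node reports "True". Stripped of the extra quoting the test harness adds, the same check can be run by hand against any cluster, for example:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'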

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-466917 stop: (23.919100119s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-466917 status: exit status 7 (100.131869ms)

                                                
                                                
-- stdout --
	multinode-466917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-466917-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-466917 status --alsologtostderr: exit status 7 (91.403197ms)

                                                
                                                
-- stdout --
	multinode-466917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-466917-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:47:43.093318  136522 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:47:43.093531  136522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:47:43.093558  136522 out.go:374] Setting ErrFile to fd 2...
	I0110 08:47:43.093578  136522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:47:43.093890  136522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:47:43.094137  136522 out.go:368] Setting JSON to false
	I0110 08:47:43.094195  136522 mustload.go:66] Loading cluster: multinode-466917
	I0110 08:47:43.094264  136522 notify.go:221] Checking for updates...
	I0110 08:47:43.095562  136522 config.go:182] Loaded profile config "multinode-466917": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:47:43.095616  136522 status.go:174] checking status of multinode-466917 ...
	I0110 08:47:43.096337  136522 cli_runner.go:164] Run: docker container inspect multinode-466917 --format={{.State.Status}}
	I0110 08:47:43.114890  136522 status.go:371] multinode-466917 host status = "Stopped" (err=<nil>)
	I0110 08:47:43.114917  136522 status.go:384] host is not running, skipping remaining checks
	I0110 08:47:43.114924  136522 status.go:176] multinode-466917 status: &{Name:multinode-466917 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:47:43.114954  136522 status.go:174] checking status of multinode-466917-m02 ...
	I0110 08:47:43.115282  136522 cli_runner.go:164] Run: docker container inspect multinode-466917-m02 --format={{.State.Status}}
	I0110 08:47:43.134596  136522 status.go:371] multinode-466917-m02 host status = "Stopped" (err=<nil>)
	I0110 08:47:43.134620  136522 status.go:384] host is not running, skipping remaining checks
	I0110 08:47:43.134628  136522 status.go:176] multinode-466917-m02 status: &{Name:multinode-466917-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-466917 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E0110 08:47:52.860137    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-466917 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.081120391s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-466917 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.80s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (30.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-466917
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-466917-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-466917-m02 --driver=docker  --container-runtime=containerd: exit status 14 (100.912417ms)

                                                
                                                
-- stdout --
	* [multinode-466917-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-466917-m02' is duplicated with machine name 'multinode-466917-m02' in profile 'multinode-466917'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-466917-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-466917-m03 --driver=docker  --container-runtime=containerd: (28.124241436s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-466917
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-466917: exit status 80 (341.360341ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-466917 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-466917-m03 already exists in multinode-466917-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-466917-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-466917-m03: (2.057647868s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.68s)

                                                
                                    
TestScheduledStopUnix (101.81s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-177411 --memory=3072 --driver=docker  --container-runtime=containerd
E0110 08:49:14.207039    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-177411 --memory=3072 --driver=docker  --container-runtime=containerd: (25.706593783s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-177411 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 08:49:34.628911  146102 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:49:34.629143  146102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:49:34.629171  146102 out.go:374] Setting ErrFile to fd 2...
	I0110 08:49:34.629190  146102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:49:34.629479  146102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:49:34.629854  146102 out.go:368] Setting JSON to false
	I0110 08:49:34.630102  146102 mustload.go:66] Loading cluster: scheduled-stop-177411
	I0110 08:49:34.630501  146102 config.go:182] Loaded profile config "scheduled-stop-177411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:49:34.630617  146102 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/scheduled-stop-177411/config.json ...
	I0110 08:49:34.630828  146102 mustload.go:66] Loading cluster: scheduled-stop-177411
	I0110 08:49:34.630999  146102 config.go:182] Loaded profile config "scheduled-stop-177411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-177411 -n scheduled-stop-177411
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-177411 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 08:49:35.101154  146191 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:49:35.101392  146191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:49:35.101420  146191 out.go:374] Setting ErrFile to fd 2...
	I0110 08:49:35.101442  146191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:49:35.101789  146191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:49:35.102098  146191 out.go:368] Setting JSON to false
	I0110 08:49:35.102320  146191 daemonize_unix.go:73] killing process 146120 as it is an old scheduled stop
	I0110 08:49:35.102404  146191 mustload.go:66] Loading cluster: scheduled-stop-177411
	I0110 08:49:35.102767  146191 config.go:182] Loaded profile config "scheduled-stop-177411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:49:35.102845  146191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/scheduled-stop-177411/config.json ...
	I0110 08:49:35.103013  146191 mustload.go:66] Loading cluster: scheduled-stop-177411
	I0110 08:49:35.103126  146191 config.go:182] Loaded profile config "scheduled-stop-177411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0110 08:49:35.110628    4257 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/scheduled-stop-177411/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-177411 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-177411 -n scheduled-stop-177411
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-177411
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-177411 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 08:50:01.164380  146894 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:50:01.164838  146894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:50:01.164878  146894 out.go:374] Setting ErrFile to fd 2...
	I0110 08:50:01.164904  146894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:50:01.165378  146894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 08:50:01.165796  146894 out.go:368] Setting JSON to false
	I0110 08:50:01.166057  146894 mustload.go:66] Loading cluster: scheduled-stop-177411
	I0110 08:50:01.166829  146894 config.go:182] Loaded profile config "scheduled-stop-177411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 08:50:01.167003  146894 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/scheduled-stop-177411/config.json ...
	I0110 08:50:01.167277  146894 mustload.go:66] Loading cluster: scheduled-stop-177411
	I0110 08:50:01.167549  146894 config.go:182] Loaded profile config "scheduled-stop-177411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E0110 08:50:37.254818    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-177411
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-177411: exit status 7 (65.364657ms)

                                                
                                                
-- stdout --
	scheduled-stop-177411
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-177411 -n scheduled-stop-177411
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-177411 -n scheduled-stop-177411: exit status 7 (63.715397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-177411" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-177411
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-177411: (4.376689136s)
--- PASS: TestScheduledStopUnix (101.81s)
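For reference, the scheduled-stop flow this test exercises can be reproduced by hand with the same commands recorded above (a plain minikube binary and a hypothetical profile name stand in for out/minikube-linux-arm64 and scheduled-stop-177411):

	minikube start -p sched-stop-demo --memory=3072 --driver=docker --container-runtime=containerd
	minikube stop -p sched-stop-demo --schedule 5m        # arm a delayed stop
	minikube stop -p sched-stop-demo --cancel-scheduled   # cancel any pending scheduled stop
	minikube stop -p sched-stop-demo --schedule 15s       # re-arm with a short delay
	minikube status -p sched-stop-demo                    # exits with status 7 once the host has stopped
	minikube delete -p sched-stop-demo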

                                                
                                    
TestInsufficientStorage (10.05s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-630782 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-630782 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.169419211s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"46ff7550-a44f-48fb-b0e1-561662a9e752","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-630782] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d214b34d-ac32-4afd-98eb-b59c30eae6eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22427"}}
	{"specversion":"1.0","id":"298eccdc-9e27-4f13-9182-5b9f19034afe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0632205f-646c-46c5-9a82-f9ef1ac4cfc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig"}}
	{"specversion":"1.0","id":"3376882e-4667-4df7-bb02-ccd7e24fad0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube"}}
	{"specversion":"1.0","id":"330ed51b-a46a-4fe7-a6fb-85894ae9bd01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e442f15a-1ab1-4713-9057-c424d53a9aad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bc2c3baf-d4a2-4537-a4de-250586f9fc68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8455efc4-bcc8-4f68-b1f1-5c1f6564a68a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9dc2ee28-bb07-4e9d-8348-2b090934e36a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b3b2cbb-403a-491d-93e7-88ca95fc7571","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"89591dca-2f5c-4a54-9b2f-30dc77fab645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-630782\" primary control-plane node in \"insufficient-storage-630782\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc1a694f-db66-453b-ad35-baa9f6b1f8ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1767944074-22401 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2022d47f-ba9a-4487-ab0b-7c49541ae6f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9b3cb67-9c52-4022-a7ed-656acfa5d945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-630782 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-630782 --output=json --layout=cluster: exit status 7 (287.849997ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-630782","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-630782","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 08:50:58.122801  148735 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-630782" does not appear in /home/jenkins/minikube-integration/22427-2439/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-630782 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-630782 --output=json --layout=cluster: exit status 7 (300.479159ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-630782","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-630782","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 08:50:58.424418  148801 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-630782" does not appear in /home/jenkins/minikube-integration/22427-2439/kubeconfig
	E0110 08:50:58.434242  148801 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/insufficient-storage-630782/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-630782" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-630782
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-630782: (2.293362155s)
--- PASS: TestInsufficientStorage (10.05s)
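The environment dump in the JSON events above shows MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, which appear to be how the harness makes /var look full without actually exhausting disk. Assuming those variables behave as the output suggests, the failure mode can be reproduced roughly like this (profile name is arbitrary):

	# assumption: the MINIKUBE_TEST_* variables below simulate a nearly full disk, as the env dump above suggests
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=containerd
	# start aborts with exit status 26 (RSRC_DOCKER_STORAGE); status then reports InsufficientStorage:
	minikube status -p storage-demo --output=json --layout=cluster
	minikube delete -p storage-demo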

                                                
                                    
TestRunningBinaryUpgrade (327.83s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.784613106 start -p running-upgrade-425777 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.784613106 start -p running-upgrade-425777 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (40.421179993s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-425777 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0110 08:57:52.860106    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-425777 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m43.967409162s)
helpers_test.go:176: Cleaning up "running-upgrade-425777" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-425777
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-425777: (2.569134441s)
--- PASS: TestRunningBinaryUpgrade (327.83s)

                                                
                                    
TestKubernetesUpgrade (91.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-459199 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-459199 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.240411433s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-459199 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-459199 --alsologtostderr: (1.357045204s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-459199 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-459199 status --format={{.Host}}: exit status 7 (66.870258ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-459199 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0110 08:52:52.859340    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-459199 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.060789103s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-459199 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-459199 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-459199 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (118.591453ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-459199] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-459199
	    minikube start -p kubernetes-upgrade-459199 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4591992 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-459199 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-459199 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-459199 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (13.270902243s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-459199" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-459199
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-459199: (2.57070603s)
--- PASS: TestKubernetesUpgrade (91.82s)
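Condensing the sequence above, the upgrade path that passes and the in-place downgrade that is refused look like this when run by hand (hypothetical profile name; versions as recorded in the log):

	minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.35.0 --driver=docker --container-runtime=containerd
	kubectl --context upgrade-demo version --output=json
	# attempting to go back down is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED):
	minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	minikube delete -p upgrade-demo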

                                                
                                    
TestMissingContainerUpgrade (134.62s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1789285563 start -p missing-upgrade-475276 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1789285563 start -p missing-upgrade-475276 --memory=3072 --driver=docker  --container-runtime=containerd: (1m5.21194036s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-475276
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-475276: (1.160406664s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-475276
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-475276 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-475276 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.151972202s)
helpers_test.go:176: Cleaning up "missing-upgrade-475276" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-475276
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-475276: (3.008470323s)
--- PASS: TestMissingContainerUpgrade (134.62s)

                                                
                                    
TestPause/serial/Start (53.05s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-746389 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-746389 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (53.045657469s)
--- PASS: TestPause/serial/Start (53.05s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-746389 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-746389 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.77063305s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.79s)

                                                
                                    
TestPause/serial/Pause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-746389 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-746389 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-746389 --output=json --layout=cluster: exit status 2 (439.926195ms)

                                                
                                                
-- stdout --
	{"Name":"pause-746389","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-746389","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

                                                
                                    
TestPause/serial/Unpause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-746389 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.99s)

                                                
                                    
TestPause/serial/PauseAgain (1.27s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-746389 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-746389 --alsologtostderr -v=5: (1.273369668s)
--- PASS: TestPause/serial/PauseAgain (1.27s)

                                                
                                    
TestPause/serial/DeletePaused (3.42s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-746389 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-746389 --alsologtostderr -v=5: (3.417037747s)
--- PASS: TestPause/serial/DeletePaused (3.42s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-746389
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-746389: exit status 1 (16.129316ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-746389: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.98s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (315.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4262606965 start -p stopped-upgrade-356152 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4262606965 start -p stopped-upgrade-356152 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (42.398116082s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4262606965 -p stopped-upgrade-356152 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4262606965 -p stopped-upgrade-356152 stop: (1.379375772s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-356152 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0110 08:54:14.207225    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-356152 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m31.743718303s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (315.52s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-356152
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-356152: (2.12476513s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.12s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (63.61s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-953196 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-953196 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (56.833647388s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-953196 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-953196
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-953196: (5.94330828s)
--- PASS: TestPreload/Start-NoPreload-PullImage (63.61s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (52.75s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-953196 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-953196 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (52.509372119s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-953196 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (52.75s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-883220 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-883220 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (98.492666ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-883220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (28.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-883220 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0110 09:00:55.907310    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-883220 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (28.443664952s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-883220 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.79s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-883220 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-883220 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (20.694310607s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-883220 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-883220 status -o json: exit status 2 (314.009393ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-883220","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-883220
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-883220: (1.998152934s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.01s)

                                                
                                    
TestNoKubernetes/serial/Start (7.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-883220 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-883220 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.851163541s)
--- PASS: TestNoKubernetes/serial/Start (7.85s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22427-2439/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-883220 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-883220 "sudo systemctl is-active --quiet service kubelet": exit status 1 (330.900012ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-883220
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-883220: (1.324238708s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-883220 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-883220 --driver=docker  --container-runtime=containerd: (6.849170908s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.85s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-883220 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-883220 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.696618ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestNetworkPlugins/group/false (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-811171 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-811171 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (229.938576ms)

                                                
                                                
-- stdout --
	* [false-811171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:01:49.907976  203374 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:01:49.908088  203374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:01:49.908094  203374 out.go:374] Setting ErrFile to fd 2...
	I0110 09:01:49.908099  203374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:01:49.908375  203374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-2439/.minikube/bin
	I0110 09:01:49.908808  203374 out.go:368] Setting JSON to false
	I0110 09:01:49.909617  203374 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2663,"bootTime":1768033047,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0110 09:01:49.909686  203374 start.go:143] virtualization:  
	I0110 09:01:49.913321  203374 out.go:179] * [false-811171] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:01:49.916419  203374 notify.go:221] Checking for updates...
	I0110 09:01:49.917246  203374 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:01:49.920648  203374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:01:49.923722  203374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-2439/kubeconfig
	I0110 09:01:49.926874  203374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-2439/.minikube
	I0110 09:01:49.930775  203374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:01:49.936581  203374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:01:49.940248  203374 config.go:182] Loaded profile config "force-systemd-env-562333": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0110 09:01:49.940477  203374 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:01:50.000432  203374 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:01:50.000564  203374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:01:50.070732  203374 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:01:50.060911436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:01:50.070838  203374 docker.go:319] overlay module found
	I0110 09:01:50.074036  203374 out.go:179] * Using the docker driver based on user configuration
	I0110 09:01:50.076997  203374 start.go:309] selected driver: docker
	I0110 09:01:50.077027  203374 start.go:928] validating driver "docker" against <nil>
	I0110 09:01:50.077055  203374 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:01:50.081116  203374 out.go:203] 
	W0110 09:01:50.084322  203374 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0110 09:01:50.087260  203374 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-811171 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-811171" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-811171

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811171"

                                                
                                                
----------------------- debugLogs end: false-811171 [took: 3.239997449s] --------------------------------
helpers_test.go:176: Cleaning up "false-811171" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-811171
--- PASS: TestNetworkPlugins/group/false (3.62s)
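
This group passes by design: the start command is expected to be rejected, since the containerd runtime needs a CNI plugin and `--cni=false` disables it, producing the MK_USAGE exit above. A sketch of a start that satisfies the requirement instead, using a hypothetical profile name:

	# Pick an explicit CNI (bridge here) rather than disabling it when the runtime is containerd.
	out/minikube-linux-arm64 start -p cni-demo --memory=3072 --driver=docker --container-runtime=containerd --cni=bridge
	# Remove the illustrative profile when done.
	out/minikube-linux-arm64 delete -p cni-demo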

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (59.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-072756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-072756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (59.258354361s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-072756 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f63b88f7-6715-4ffa-bbca-fca67a7e589f] Pending
helpers_test.go:353: "busybox" [f63b88f7-6715-4ffa-bbca-fca67a7e589f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f63b88f7-6715-4ffa-bbca-fca67a7e589f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004103085s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-072756 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)
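
The deploy step applies the repo's testdata/busybox.yaml, waits for the pod carrying the integration-test=busybox label to become healthy, and then execs into it to read the open-file limit. A rough standalone equivalent of that sequence (the `kubectl wait` flags are my approximation; the harness polls the pod itself):

	kubectl --context old-k8s-version-072756 create -f testdata/busybox.yaml
	# Wait for the labelled pod to become Ready, then read its file-descriptor limit.
	kubectl --context old-k8s-version-072756 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-072756 exec busybox -- /bin/sh -c "ulimit -n"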

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-072756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0110 09:09:14.207244    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-072756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.12748652s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-072756 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-072756 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-072756 --alsologtostderr -v=3: (12.100642995s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-072756 -n old-k8s-version-072756
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-072756 -n old-k8s-version-072756: exit status 7 (67.235963ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-072756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
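
The harness tolerates exit status 7 from `minikube status` here because the profile was just stopped; the command reports component state through its exit code rather than failing outright, and the addon can still be enabled against the stopped profile so that it is applied on the next start (which the SecondStart step below relies on). A short sketch of that sequence with the same profile name; the exit-code interpretation is minikube's, not something the test asserts:

	# On a stopped profile this prints "Stopped" and exits non-zero.
	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-072756 -n old-k8s-version-072756
	echo "status exit code: $?"
	# Enabling an addon while stopped records it in the profile config for the next start.
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-072756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4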

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-072756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-072756 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.11309404s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-072756 -n old-k8s-version-072756
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pjf4v" [852367a8-15ca-494b-954f-1d805eb20cd2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003401022s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pjf4v" [852367a8-15ca-494b-954f-1d805eb20cd2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.011038751s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-072756 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-072756 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-072756 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-072756 -n old-k8s-version-072756
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-072756 -n old-k8s-version-072756: exit status 2 (345.132732ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-072756 -n old-k8s-version-072756
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-072756 -n old-k8s-version-072756: exit status 2 (328.813807ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-072756 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-072756 -n old-k8s-version-072756
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-072756 -n old-k8s-version-072756
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)
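
The pause check alternates pause and unpause with status probes: while paused, the API server field reads Paused and the kubelet field reads Stopped, both via non-zero exits the test explicitly allows. A condensed sketch of the same flow, reusing the profile name from this run:

	out/minikube-linux-arm64 pause -p old-k8s-version-072756 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-072756   # expect "Paused"
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-072756     # expect "Stopped"
	out/minikube-linux-arm64 unpause -p old-k8s-version-072756 --alsologtostderr -v=1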

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (53.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-765043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-765043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (53.125380716s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-765043 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e51074f6-0fd8-4060-b5eb-d6c4d86cf390] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e51074f6-0fd8-4060-b5eb-d6c4d86cf390] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003632248s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-765043 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-765043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-765043 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-765043 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-765043 --alsologtostderr -v=3: (12.097032919s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-765043 -n no-preload-765043
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-765043 -n no-preload-765043: exit status 7 (83.273025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-765043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-765043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-765043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (49.8309041s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-765043 -n no-preload-765043
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-228xz" [c13ddc53-e29c-4a34-a082-5d6cee7c98fe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003358841s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-228xz" [c13ddc53-e29c-4a34-a082-5d6cee7c98fe] Running
E0110 09:12:52.859798    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003758642s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-765043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-765043 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-765043 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-765043 -n no-preload-765043
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-765043 -n no-preload-765043: exit status 2 (332.614972ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-765043 -n no-preload-765043
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-765043 -n no-preload-765043: exit status 2 (327.63451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-765043 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-765043 -n no-preload-765043
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-765043 -n no-preload-765043
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-070240 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-070240 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (45.820490441s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-070240 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a48c706c-de70-49a9-8db0-64908ee3dd7b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a48c706c-de70-49a9-8db0-64908ee3dd7b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00311075s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-070240 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-070240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-070240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.269864856s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-070240 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-070240 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-070240 --alsologtostderr -v=3: (12.614110483s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-214160 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E0110 09:14:04.863233    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:04.868562    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:04.878993    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:04.899439    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:04.939789    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:05.020155    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:05.180644    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:05.501249    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:06.142244    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:07.422759    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:09.982921    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-214160 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (52.310091108s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.31s)
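
This profile runs the API server on port 8444 instead of minikube's default 8443 (the --apiserver-port flag above). One way to confirm the non-default port after such a start, offered as a sketch rather than part of the test:

	# The control-plane URL printed here should end in :8444 for this profile.
	kubectl --context default-k8s-diff-port-214160 cluster-info | grep 8444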

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-070240 -n embed-certs-070240
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-070240 -n embed-certs-070240: exit status 7 (95.224621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-070240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (55.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-070240 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E0110 09:14:14.207268    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:15.105612    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:25.346412    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:14:45.826658    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-070240 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (54.816921812s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-070240 -n embed-certs-070240
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-214160 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [54af4856-73f2-424f-96a8-aa9739c2e2c2] Pending
helpers_test.go:353: "busybox" [54af4856-73f2-424f-96a8-aa9739c2e2c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [54af4856-73f2-424f-96a8-aa9739c2e2c2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003880518s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-214160 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-214160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-214160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.426199318s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-214160 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-214160 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-214160 --alsologtostderr -v=3: (12.209686954s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-xnm88" [d53ac164-efa5-441b-9d9a-c7676c266746] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003755914s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-xnm88" [d53ac164-efa5-441b-9d9a-c7676c266746] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003230903s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-070240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160: exit status 7 (90.517622ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-214160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-214160 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-214160 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (55.767238815s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-070240 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-070240 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-070240 -n embed-certs-070240
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-070240 -n embed-certs-070240: exit status 2 (402.590324ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-070240 -n embed-certs-070240
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-070240 -n embed-certs-070240: exit status 2 (402.741237ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-070240 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-070240 -n embed-certs-070240
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-070240 -n embed-certs-070240
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-563690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E0110 09:15:26.786908    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-563690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (35.096062103s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-563690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-563690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.639509203s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-563690 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-563690 --alsologtostderr -v=3: (1.567885125s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-563690 -n newest-cni-563690
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-563690 -n newest-cni-563690: exit status 7 (73.540941ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-563690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-563690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-563690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (15.442822537s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-563690 -n newest-cni-563690
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-prcvp" [a1575d61-6794-49ac-ae07-19de8a3cd6d1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003294418s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-prcvp" [a1575d61-6794-49ac-ae07-19de8a3cd6d1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004750253s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-214160 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-563690 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.86s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-563690 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-563690 -n newest-cni-563690
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-563690 -n newest-cni-563690: exit status 2 (361.984567ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-563690 -n newest-cni-563690
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-563690 -n newest-cni-563690: exit status 2 (345.626296ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-563690 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-563690 --alsologtostderr -v=1: (1.060653248s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-563690 -n newest-cni-563690
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-563690 -n newest-cni-563690
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-214160 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-214160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-214160 --alsologtostderr -v=1: (1.459834797s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160: exit status 2 (590.463915ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160: exit status 2 (516.717329ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-214160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-214160 -n default-k8s-diff-port-214160
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.03s)
E0110 09:21:57.527474    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestPreload/PreloadSrc/gcs (4.69s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-653898 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-653898 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (4.439909541s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-653898" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-653898
--- PASS: TestPreload/PreloadSrc/gcs (4.69s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.1s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0110 09:16:29.841422    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:29.846659    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:29.857016    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:29.879636    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:29.919919    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:30.000411    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:30.160870    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:30.481554    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.099577287s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.10s)

                                                
                                    
TestPreload/PreloadSrc/github (4.76s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-038302 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
E0110 09:16:31.121715    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:32.402590    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:34.963778    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-038302 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (4.459550002s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-038302" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-038302
--- PASS: TestPreload/PreloadSrc/github (4.76s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (1.01s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-985645 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-985645" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-985645
--- PASS: TestPreload/PreloadSrc/gcs-cached (1.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0110 09:16:40.084030    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:48.707404    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:16:50.324643    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:17:10.805386    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (51.19741097s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.20s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-811171 "pgrep -a kubelet"
I0110 09:17:21.821492    4257 config.go:182] Loaded profile config "auto-811171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-811171 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7d5f2" [b1b40714-2b9b-4a3c-946b-01c84a47f64b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-7d5f2" [b1b40714-2b9b-4a3c-946b-01c84a47f64b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003607254s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-zs2g9" [106b065b-4688-4727-aa25-5cdc383beeb0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005081382s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-811171 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-811171 "pgrep -a kubelet"
I0110 09:17:34.083144    4257 config.go:182] Loaded profile config "kindnet-811171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-811171 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-qdg6j" [9f8e4829-078e-4f52-9c60-ccfc5a031c73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 09:17:35.907559    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/addons-574801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-qdg6j" [9f8e4829-078e-4f52-9c60-ccfc5a031c73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00411752s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-811171 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.870315161s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.87s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.653095631s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.65s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-811171 "pgrep -a kubelet"
I0110 09:19:03.449803    4257 config.go:182] Loaded profile config "custom-flannel-811171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-811171 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8s5lh" [f734317f-bf0e-411d-b442-9f3d3e0f0544] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 09:19:04.863940    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/old-k8s-version-072756/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-8s5lh" [f734317f-bf0e-411d-b442-9f3d3e0f0544] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003249466s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-vz87n" [49908192-a03a-47f6-8bf0-09675a981898] Running
E0110 09:19:13.686747    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/no-preload-765043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003652521s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-811171 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0110 09:19:14.207113    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/functional-822966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-811171 "pgrep -a kubelet"
I0110 09:19:19.743172    4257 config.go:182] Loaded profile config "calico-811171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-811171 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zxtrd" [cbb8ee3c-b1a3-4be6-bce3-2b0f21a005bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zxtrd" [cbb8ee3c-b1a3-4be6-bce3-2b0f21a005bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004356312s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-811171 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0110 09:19:50.161278    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:50.166617    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:50.176761    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:50.197385    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:50.237715    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:50.318026    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:50.478387    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:50.798753    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:51.439094    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:19:52.719647    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m24.519360098s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.52s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (54.15s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0110 09:20:00.403581    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:20:10.644192    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:20:31.125169    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/default-k8s-diff-port-214160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.145463814s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-ztsrm" [33eb4a41-39fd-4533-b838-94ab9f7e5cab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003717407s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
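Note: the ControllerPod check above waits for pods labelled "app=flannel" in the "kube-flannel" namespace to become healthy. A rough manual equivalent, assuming the flannel-811171 profile and kubeconfig context are still present, would be:
	kubectl --context flannel-811171 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m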

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-811171 "pgrep -a kubelet"
I0110 09:20:58.625019    4257 config.go:182] Loaded profile config "flannel-811171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-811171 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zdxpx" [b17c4044-8b06-40df-8b30-fd9a6aa54673] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zdxpx" [b17c4044-8b06-40df-8b30-fd9a6aa54673] Running
I0110 09:21:02.908587    4257 config.go:182] Loaded profile config "enable-default-cni-811171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003102495s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)
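Note: NetCatPod deploys testdata/netcat-deployment.yaml into the "default" namespace and waits for the "app=netcat" pod to reach Running/Ready. If the profile is still up, roughly the same condition can be checked by hand (a sketch, not part of the test itself) with:
	kubectl --context flannel-811171 rollout status deployment/netcat --timeout=15m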

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-811171 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-811171 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rrgzs" [839009ef-e159-4a76-aeb5-bbedcbc4c9d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rrgzs" [839009ef-e159-4a76-aeb5-bbedcbc4c9d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003566384s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-811171 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
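Note: the HairPin check verifies that the netcat pod can reach itself back through its own Service name (hairpin traffic); nc exits non-zero if the connection is not made within the 5-second timeout. To inspect the Service the test goes through (assuming it is named "netcat", as the nc target above suggests), one could run:
	kubectl --context flannel-811171 get svc netcat -o wide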

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-811171 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (46.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-811171 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (46.146110915s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-811171 "pgrep -a kubelet"
I0110 09:22:21.927441    4257 config.go:182] Loaded profile config "bridge-811171": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-811171 replace --force -f testdata/netcat-deployment.yaml
E0110 09:22:22.085528    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:22.090849    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:22.101153    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:22.123486    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:22.163763    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-wzw6r" [cd99ef9f-83e7-46ab-a9a8-54b39f68a060] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 09:22:22.244056    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:22.404588    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:22.725167    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:23.365762    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:24.645970    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-wzw6r" [cd99ef9f-83e7-46ab-a9a8-54b39f68a060] Running
E0110 09:22:27.206492    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/auto-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:27.787749    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:27.793299    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:27.803613    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:27.823928    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:27.864447    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:27.944802    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:28.105243    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:28.426137    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:29.066659    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:22:30.347241    4257 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-2439/.minikube/profiles/kindnet-811171/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003285056s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-811171 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-811171 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (30/337)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-367124 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-367124" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-367124
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.3s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-666235" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-666235
--- SKIP: TestStartStop/group/disable-driver-mounts (0.30s)

TestNetworkPlugins/group/kubenet (3.41s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-811171 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-811171
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-811171
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /etc/hosts:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /etc/resolv.conf:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-811171
>>> host: crictl pods:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: crictl containers:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> k8s: describe netcat deployment:
error: context "kubenet-811171" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-811171" does not exist
>>> k8s: netcat logs:
error: context "kubenet-811171" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-811171" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-811171" does not exist
>>> k8s: coredns logs:
error: context "kubenet-811171" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-811171" does not exist
>>> k8s: api server logs:
error: context "kubenet-811171" does not exist
>>> host: /etc/cni:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: ip a s:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: ip r s:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: iptables-save:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: iptables table nat:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-811171" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-811171" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-811171" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: kubelet daemon config:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> k8s: kubelet logs:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-811171
>>> host: docker daemon status:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: docker daemon config:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: docker system info:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: cri-docker daemon status:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: cri-docker daemon config:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: cri-dockerd version:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: containerd daemon status:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: containerd daemon config:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: containerd config dump:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: crio daemon status:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: crio daemon config:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: /etc/crio:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
>>> host: crio config:
* Profile "kubenet-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811171"
----------------------- debugLogs end: kubenet-811171 [took: 3.257796029s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-811171" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-811171
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)

TestNetworkPlugins/group/cilium (3.79s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-811171 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-811171
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-811171
>>> host: /etc/nsswitch.conf:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"
>>> host: /etc/hosts:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"
>>> host: /etc/resolv.conf:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-811171
>>> host: crictl pods:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"
>>> host: crictl containers:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"
>>> k8s: describe netcat deployment:
error: context "cilium-811171" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-811171" does not exist
>>> k8s: netcat logs:
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-811171

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-811171

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-811171

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-811171

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-811171" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-811171

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-811171" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811171"

                                                
                                                
----------------------- debugLogs end: cilium-811171 [took: 3.629222788s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-811171" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-811171
--- SKIP: TestNetworkPlugins/group/cilium (3.79s)
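Every collection step above fails for the same reason: the cilium test was skipped before the "cilium-811171" profile or its kubeconfig context was ever created, so both minikube and kubectl have nothing to query. As a minimal sketch of how one might confirm this locally (the profile name is taken from the log above; the commands are standard minikube/kubectl invocations, not part of the test harness):

	# Hypothetical local check, assuming the same profile name as in the log above.
	minikube profile list                      # cilium-811171 is expected to be absent
	kubectl config get-contexts cilium-811171  # fails: no such context in the kubeconfig
	# Creating the profile is what would make the debug commands above meaningful:
	minikube start -p cilium-811171 --driver=docker --container-runtime=containerd --cni=cilium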

                                                
                                    