Test Report: Docker_Linux_containerd_arm64 22332

                    
56e1ce855180c73f84c0d958e6323d58f60b3065:2025-12-27:43013

Failed tests (2/337)

Order  Failed test           Duration (s)
52     TestForceSystemdFlag  504.97
53     TestForceSystemdEnv   506.03
TestForceSystemdFlag (504.97s)
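The log below is the verbatim harness output; the failing invocation appears in the dbg line near its top. To reproduce outside CI, the same start can be rerun by hand (a sketch, assuming out/minikube-linux-arm64 has been built from the same commit and a local Docker daemon is available; the profile name is the one this run generated):

    # Rerun the exact start command that exited with status 109 in CI
    out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 \
      --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd

    # Remove the profile and its Docker resources afterwards
    out/minikube-linux-arm64 delete -p force-systemd-flag-875839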

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1227 20:42:51.829119  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m21.056914626s)

-- stdout --
	* [force-systemd-flag-875839] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-875839" primary control-plane node in "force-systemd-flag-875839" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I1227 20:42:44.450614  512816 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:42:44.450748  512816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:42:44.450759  512816 out.go:374] Setting ErrFile to fd 2...
	I1227 20:42:44.450765  512816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:42:44.451046  512816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:42:44.451537  512816 out.go:368] Setting JSON to false
	I1227 20:42:44.452468  512816 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8716,"bootTime":1766859449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 20:42:44.452539  512816 start.go:143] virtualization:  
	I1227 20:42:44.456098  512816 out.go:179] * [force-systemd-flag-875839] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:42:44.460810  512816 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:42:44.460944  512816 notify.go:221] Checking for updates...
	I1227 20:42:44.467481  512816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:42:44.470760  512816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:42:44.474017  512816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 20:42:44.477177  512816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:42:44.480226  512816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:42:44.483786  512816 config.go:182] Loaded profile config "force-systemd-env-857112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:42:44.483901  512816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:42:44.514242  512816 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:42:44.514368  512816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:42:44.600674  512816 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:42:44.590030356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:42:44.600784  512816 docker.go:319] overlay module found
	I1227 20:42:44.603988  512816 out.go:179] * Using the docker driver based on user configuration
	I1227 20:42:44.606895  512816 start.go:309] selected driver: docker
	I1227 20:42:44.606918  512816 start.go:928] validating driver "docker" against <nil>
	I1227 20:42:44.606938  512816 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:42:44.607721  512816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:42:44.660643  512816 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:42:44.65175192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:42:44.660805  512816 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:42:44.661029  512816 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:42:44.663983  512816 out.go:179] * Using Docker driver with root privileges
	I1227 20:42:44.666777  512816 cni.go:84] Creating CNI manager for ""
	I1227 20:42:44.666837  512816 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 20:42:44.666853  512816 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:42:44.666931  512816 start.go:353] cluster config:
	{Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I1227 20:42:44.670122  512816 out.go:179] * Starting "force-systemd-flag-875839" primary control-plane node in "force-systemd-flag-875839" cluster
	I1227 20:42:44.673023  512816 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 20:42:44.675977  512816 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:42:44.678899  512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:42:44.678926  512816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:42:44.678947  512816 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 20:42:44.678957  512816 cache.go:65] Caching tarball of preloaded images
	I1227 20:42:44.679037  512816 preload.go:251] Found /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 20:42:44.679046  512816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 20:42:44.679152  512816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json ...
	I1227 20:42:44.679204  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json: {Name:mk226d5712d36dc79e3bc51dc29625caf226ee6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:44.698707  512816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:42:44.698733  512816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:42:44.698766  512816 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:42:44.698799  512816 start.go:360] acquireMachinesLock for force-systemd-flag-875839: {Name:mka1cb79a66dbff1223f12a6e0653c935a407a1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:42:44.698917  512816 start.go:364] duration metric: took 96.443µs to acquireMachinesLock for "force-systemd-flag-875839"
	I1227 20:42:44.698951  512816 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 20:42:44.699019  512816 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:42:44.702439  512816 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:42:44.702717  512816 start.go:159] libmachine.API.Create for "force-systemd-flag-875839" (driver="docker")
	I1227 20:42:44.702756  512816 client.go:173] LocalClient.Create starting
	I1227 20:42:44.702822  512816 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem
	I1227 20:42:44.702861  512816 main.go:144] libmachine: Decoding PEM data...
	I1227 20:42:44.702888  512816 main.go:144] libmachine: Parsing certificate...
	I1227 20:42:44.702941  512816 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem
	I1227 20:42:44.702963  512816 main.go:144] libmachine: Decoding PEM data...
	I1227 20:42:44.702975  512816 main.go:144] libmachine: Parsing certificate...
	I1227 20:42:44.703517  512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:42:44.719292  512816 cli_runner.go:211] docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:42:44.719369  512816 network_create.go:284] running [docker network inspect force-systemd-flag-875839] to gather additional debugging logs...
	I1227 20:42:44.719387  512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839
	W1227 20:42:44.733398  512816 cli_runner.go:211] docker network inspect force-systemd-flag-875839 returned with exit code 1
	I1227 20:42:44.733430  512816 network_create.go:287] error running [docker network inspect force-systemd-flag-875839]: docker network inspect force-systemd-flag-875839: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-875839 not found
	I1227 20:42:44.733442  512816 network_create.go:289] output of [docker network inspect force-systemd-flag-875839]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-875839 not found
	
	** /stderr **
	I1227 20:42:44.733536  512816 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:42:44.750679  512816 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-39a3264d8f81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:08:2a:c8:87:59} reservation:<nil>}
	I1227 20:42:44.751059  512816 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ad751755a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:9d:74:07:ce:ba} reservation:<nil>}
	I1227 20:42:44.751350  512816 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f84ef5e3062f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:ef:60:e2:0e:e4} reservation:<nil>}
	I1227 20:42:44.751800  512816 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a47a80}
	I1227 20:42:44.751824  512816 network_create.go:124] attempt to create docker network force-systemd-flag-875839 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 20:42:44.751879  512816 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-875839 force-systemd-flag-875839
	I1227 20:42:44.817033  512816 network_create.go:108] docker network force-systemd-flag-875839 192.168.76.0/24 created
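At this point the dedicated bridge network for the profile exists, using the free subnet picked above. To confirm what the log reports, the network can be inspected directly on the CI host while the profile is up (a sketch using the standard docker CLI):

    docker network inspect force-systemd-flag-875839 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected output: 192.168.76.0/24 192.168.76.1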
	I1227 20:42:44.817068  512816 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-875839" container
	I1227 20:42:44.817162  512816 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:42:44.833900  512816 cli_runner.go:164] Run: docker volume create force-systemd-flag-875839 --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:42:44.855305  512816 oci.go:103] Successfully created a docker volume force-systemd-flag-875839
	I1227 20:42:44.855397  512816 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-875839-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --entrypoint /usr/bin/test -v force-systemd-flag-875839:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:42:45.520564  512816 oci.go:107] Successfully prepared a docker volume force-systemd-flag-875839
	I1227 20:42:45.520637  512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:42:45.520651  512816 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:42:45.520724  512816 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-875839:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:42:49.411447  512816 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-875839:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.890669583s)
	I1227 20:42:49.411480  512816 kic.go:203] duration metric: took 3.890825481s to extract preloaded images to volume ...
	W1227 20:42:49.411625  512816 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:42:49.411780  512816 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:42:49.466802  512816 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-875839 --name force-systemd-flag-875839 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-875839 --network force-systemd-flag-875839 --ip 192.168.76.2 --volume force-systemd-flag-875839:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:42:49.764752  512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Running}}
	I1227 20:42:49.794580  512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
	I1227 20:42:49.818888  512816 cli_runner.go:164] Run: docker exec force-systemd-flag-875839 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:42:49.884825  512816 oci.go:144] the created container "force-systemd-flag-875839" has a running status.
	I1227 20:42:49.884858  512816 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa...
	I1227 20:42:50.331141  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 20:42:50.331230  512816 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:42:50.354044  512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
	I1227 20:42:50.377426  512816 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:42:50.377459  512816 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-875839 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:42:50.420652  512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
	I1227 20:42:50.438519  512816 machine.go:94] provisionDockerMachine start ...
	I1227 20:42:50.438612  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:50.456377  512816 main.go:144] libmachine: Using SSH client type: native
	I1227 20:42:50.456728  512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33416 <nil> <nil>}
	I1227 20:42:50.456744  512816 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:42:50.457445  512816 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:42:53.598911  512816 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-875839
	
	I1227 20:42:53.598937  512816 ubuntu.go:182] provisioning hostname "force-systemd-flag-875839"
	I1227 20:42:53.599044  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:53.617338  512816 main.go:144] libmachine: Using SSH client type: native
	I1227 20:42:53.617662  512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33416 <nil> <nil>}
	I1227 20:42:53.617679  512816 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-875839 && echo "force-systemd-flag-875839" | sudo tee /etc/hostname
	I1227 20:42:53.764333  512816 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-875839
	
	I1227 20:42:53.764479  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:53.782984  512816 main.go:144] libmachine: Using SSH client type: native
	I1227 20:42:53.783321  512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33416 <nil> <nil>}
	I1227 20:42:53.783352  512816 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-875839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-875839/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-875839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:42:53.923458  512816 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:42:53.923486  512816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-300670/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-300670/.minikube}
	I1227 20:42:53.923554  512816 ubuntu.go:190] setting up certificates
	I1227 20:42:53.923579  512816 provision.go:84] configureAuth start
	I1227 20:42:53.923657  512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
	I1227 20:42:53.941558  512816 provision.go:143] copyHostCerts
	I1227 20:42:53.941608  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
	I1227 20:42:53.941644  512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem, removing ...
	I1227 20:42:53.941656  512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
	I1227 20:42:53.941740  512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem (1082 bytes)
	I1227 20:42:53.941834  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
	I1227 20:42:53.941860  512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem, removing ...
	I1227 20:42:53.941879  512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
	I1227 20:42:53.941908  512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem (1123 bytes)
	I1227 20:42:53.941966  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
	I1227 20:42:53.941987  512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem, removing ...
	I1227 20:42:53.941997  512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
	I1227 20:42:53.942022  512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem (1679 bytes)
	I1227 20:42:53.942086  512816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-875839 san=[127.0.0.1 192.168.76.2 force-systemd-flag-875839 localhost minikube]
	I1227 20:42:54.202929  512816 provision.go:177] copyRemoteCerts
	I1227 20:42:54.202994  512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:42:54.203044  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.221943  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.321588  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:42:54.321656  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 20:42:54.343016  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:42:54.343080  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 20:42:54.360298  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:42:54.360375  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:42:54.377941  512816 provision.go:87] duration metric: took 454.325341ms to configureAuth
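configureAuth generated a server certificate with the SANs listed a few lines up (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube) and copied it into the node. If a certificate problem is suspected, the SANs can be verified with openssl (a sketch; the path is the one from this run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'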
	I1227 20:42:54.377969  512816 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:42:54.378138  512816 config.go:182] Loaded profile config "force-systemd-flag-875839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:42:54.378151  512816 machine.go:97] duration metric: took 3.939607712s to provisionDockerMachine
	I1227 20:42:54.378158  512816 client.go:176] duration metric: took 9.675390037s to LocalClient.Create
	I1227 20:42:54.378178  512816 start.go:167] duration metric: took 9.675461349s to libmachine.API.Create "force-systemd-flag-875839"
	I1227 20:42:54.378187  512816 start.go:293] postStartSetup for "force-systemd-flag-875839" (driver="docker")
	I1227 20:42:54.378196  512816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:42:54.378248  512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:42:54.378289  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.394904  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.495529  512816 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:42:54.498962  512816 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:42:54.498995  512816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:42:54.499008  512816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/addons for local assets ...
	I1227 20:42:54.499064  512816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/files for local assets ...
	I1227 20:42:54.499159  512816 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> 3025412.pem in /etc/ssl/certs
	I1227 20:42:54.499172  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> /etc/ssl/certs/3025412.pem
	I1227 20:42:54.499303  512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:42:54.507013  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /etc/ssl/certs/3025412.pem (1708 bytes)
	I1227 20:42:54.524308  512816 start.go:296] duration metric: took 146.106071ms for postStartSetup
	I1227 20:42:54.524674  512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
	I1227 20:42:54.541545  512816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json ...
	I1227 20:42:54.541820  512816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:42:54.541868  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.558475  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.656227  512816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:42:54.660890  512816 start.go:128] duration metric: took 9.961854464s to createHost
	I1227 20:42:54.660916  512816 start.go:83] releasing machines lock for "force-systemd-flag-875839", held for 9.961983524s
	I1227 20:42:54.661038  512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
	I1227 20:42:54.678050  512816 ssh_runner.go:195] Run: cat /version.json
	I1227 20:42:54.678108  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.678353  512816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:42:54.678414  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.697205  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.698536  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.790825  512816 ssh_runner.go:195] Run: systemctl --version
	I1227 20:42:54.890335  512816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:42:54.894628  512816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:42:54.894703  512816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:42:54.922183  512816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:42:54.922205  512816 start.go:496] detecting cgroup driver to use...
	I1227 20:42:54.922220  512816 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:42:54.922274  512816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 20:42:54.937492  512816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 20:42:54.950607  512816 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:42:54.950719  512816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:42:54.968539  512816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:42:54.987404  512816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:42:55.144395  512816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:42:55.270151  512816 docker.go:234] disabling docker service ...
	I1227 20:42:55.270245  512816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:42:55.293254  512816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:42:55.307641  512816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:42:55.428488  512816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:42:55.544420  512816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:42:55.556970  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:42:55.572425  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 20:42:55.581870  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 20:42:55.591038  512816 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 20:42:55.591152  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 20:42:55.600400  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:42:55.609307  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 20:42:55.618091  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:42:55.627102  512816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:42:55.635238  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 20:42:55.644259  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 20:42:55.653590  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 20:42:55.662844  512816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:42:55.670803  512816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:42:55.678906  512816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:42:55.792508  512816 ssh_runner.go:195] Run: sudo systemctl restart containerd
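The sed edits above set SystemdCgroup = true before containerd is restarted. Since this test exists to verify --force-systemd, the effective setting is worth double-checking inside the node (a sketch; the exact key layout of config.toml varies across containerd versions):

    out/minikube-linux-arm64 ssh -p force-systemd-flag-875839 \
      "sudo grep -n SystemdCgroup /etc/containerd/config.toml"
    # expected: SystemdCgroup = true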
	I1227 20:42:55.925141  512816 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 20:42:55.925261  512816 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 20:42:55.929276  512816 start.go:574] Will wait 60s for crictl version
	I1227 20:42:55.929388  512816 ssh_runner.go:195] Run: which crictl
	I1227 20:42:55.932931  512816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:42:55.957058  512816 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 20:42:55.957181  512816 ssh_runner.go:195] Run: containerd --version
	I1227 20:42:55.979962  512816 ssh_runner.go:195] Run: containerd --version
	I1227 20:42:56.007149  512816 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 20:42:56.010308  512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:42:56.027937  512816 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:42:56.032126  512816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:42:56.043260  512816 kubeadm.go:884] updating cluster {Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:42:56.043408  512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:42:56.043480  512816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:42:56.072941  512816 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 20:42:56.072967  512816 containerd.go:542] Images already preloaded, skipping extraction
	I1227 20:42:56.073040  512816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:42:56.098189  512816 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 20:42:56.098216  512816 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:42:56.098225  512816 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1227 20:42:56.098317  512816 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-875839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
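The kubelet unit drop-in shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Once daemon-reload has run, the merged unit can be reviewed exactly as systemd sees it (a sketch, run inside the node):

    sudo systemctl cat kubelet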
	I1227 20:42:56.098386  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 20:42:56.123756  512816 cni.go:84] Creating CNI manager for ""
	I1227 20:42:56.123781  512816 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 20:42:56.123798  512816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:42:56.123827  512816 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-875839 NodeName:force-systemd-flag-875839 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:42:56.123946  512816 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-875839"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:42:56.124019  512816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:42:56.133114  512816 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:42:56.133224  512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:42:56.141401  512816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1227 20:42:56.157153  512816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:42:56.172136  512816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
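The rendered kubeadm config (printed in full above) is staged on the node at /var/tmp/minikube/kubeadm.yaml.new. When a start like this one later times out, that staged file is handy for a manual dry run against the kubeadm binary minikube installed (a sketch, run inside the node):

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run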
	I1227 20:42:56.185664  512816 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:42:56.189569  512816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:42:56.200285  512816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:42:56.310897  512816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:42:56.330461  512816 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839 for IP: 192.168.76.2
	I1227 20:42:56.330485  512816 certs.go:195] generating shared ca certs ...
	I1227 20:42:56.330501  512816 certs.go:227] acquiring lock for ca certs: {Name:mkf93c4b7b6f0a265527090e39bdf731f6a1491b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.330640  512816 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key
	I1227 20:42:56.330697  512816 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key
	I1227 20:42:56.330709  512816 certs.go:257] generating profile certs ...
	I1227 20:42:56.330767  512816 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key
	I1227 20:42:56.330784  512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt with IP's: []
	I1227 20:42:56.654113  512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt ...
	I1227 20:42:56.654148  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt: {Name:mk690272e7c9732b7460196a75d46ce521525785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.654393  512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key ...
	I1227 20:42:56.654411  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key: {Name:mkc39b22fbff4b40897d4f98a3d62c6f55391f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.654517  512816 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1
	I1227 20:42:56.654538  512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 20:42:56.834765  512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 ...
	I1227 20:42:56.834804  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1: {Name:mkc9aaa28a12a38cdd436242cc98ebbe1035831f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.834991  512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1 ...
	I1227 20:42:56.835006  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1: {Name:mk3594c59348fecf67f0f33d24079612f39e8847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.835098  512816 certs.go:382] copying /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 -> /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt
	I1227 20:42:56.835196  512816 certs.go:386] copying /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1 -> /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key
	I1227 20:42:56.835265  512816 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key
	I1227 20:42:56.835286  512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt with IP's: []
	I1227 20:42:57.497782  512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt ...
	I1227 20:42:57.497816  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt: {Name:mk6c13ddc40f97cd4770101e7d4b970e00fe21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:57.498023  512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key ...
	I1227 20:42:57.498038  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key: {Name:mk60c7e4a1d2a1da5fcd88dbfb787475edf7630f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:57.498129  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:42:57.498152  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:42:57.498166  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:42:57.498182  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:42:57.498197  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:42:57.498209  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:42:57.498226  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:42:57.498241  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:42:57.498302  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem (1338 bytes)
	W1227 20:42:57.498346  512816 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541_empty.pem, impossibly tiny 0 bytes
	I1227 20:42:57.498360  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:42:57.498388  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem (1082 bytes)
	I1227 20:42:57.498416  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:42:57.498445  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem (1679 bytes)
	I1227 20:42:57.498495  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem (1708 bytes)
	I1227 20:42:57.498530  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.498552  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.498567  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem -> /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.499202  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:42:57.518888  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:42:57.541602  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:42:57.560371  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:42:57.578574  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 20:42:57.596428  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:42:57.614059  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:42:57.631746  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:42:57.649051  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /usr/share/ca-certificates/3025412.pem (1708 bytes)
	I1227 20:42:57.666899  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:42:57.685179  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem --> /usr/share/ca-certificates/302541.pem (1338 bytes)
	I1227 20:42:57.704185  512816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:42:57.717788  512816 ssh_runner.go:195] Run: openssl version
	I1227 20:42:57.724506  512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.732186  512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/302541.pem /etc/ssl/certs/302541.pem
	I1227 20:42:57.739774  512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.743710  512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:01 /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.743773  512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.785241  512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:42:57.792974  512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/302541.pem /etc/ssl/certs/51391683.0
	I1227 20:42:57.800846  512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.810694  512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3025412.pem /etc/ssl/certs/3025412.pem
	I1227 20:42:57.819628  512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.825369  512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:01 /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.825452  512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.867666  512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:42:57.875568  512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3025412.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:42:57.882973  512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.890507  512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:42:57.898264  512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.902159  512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.902227  512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.943302  512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:42:57.950884  512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
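The openssl x509 -hash -noout calls above compute each CA's subject hash, which names the /etc/ssl/certs/<hash>.0 symlink; this is the same layout c_rehash produces and is how OpenSSL resolves trust anchors during verification. Using the minikubeCA values from this run:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints the subject hash, here b5213941, so the matching trust link is:
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0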
	I1227 20:42:57.958222  512816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:42:57.961793  512816 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:42:57.961846  512816 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:42:57.961921  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 20:42:57.961986  512816 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:42:57.992506  512816 cri.go:96] found id: ""
	I1227 20:42:57.992583  512816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:42:58.003253  512816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:42:58.011987  512816 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:42:58.012081  512816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:42:58.020896  512816 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:42:58.020916  512816 kubeadm.go:158] found existing configuration files:
	
	I1227 20:42:58.020969  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:42:58.030325  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:42:58.030399  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:42:58.039358  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:42:58.049599  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:42:58.049713  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:42:58.058561  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:42:58.068422  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:42:58.068537  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:42:58.077497  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:42:58.087253  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:42:58.087372  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:42:58.096312  512816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:42:58.136339  512816 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:42:58.136633  512816 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:42:58.210092  512816 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:42:58.210244  512816 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:42:58.210334  512816 kubeadm.go:319] OS: Linux
	I1227 20:42:58.210426  512816 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:42:58.210510  512816 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:42:58.210589  512816 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:42:58.210671  512816 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:42:58.210755  512816 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:42:58.210837  512816 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:42:58.210918  512816 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:42:58.211026  512816 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:42:58.211119  512816 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:42:58.277645  512816 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:42:58.277833  512816 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:42:58.277971  512816 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:42:58.283796  512816 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:42:58.290858  512816 out.go:252]   - Generating certificates and keys ...
	I1227 20:42:58.291030  512816 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:42:58.291136  512816 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:42:58.557075  512816 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:42:58.748413  512816 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:42:58.793614  512816 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:42:59.304343  512816 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:42:59.833617  512816 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:42:59.834012  512816 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:43:00.429800  512816 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:43:00.430239  512816 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:43:00.529822  512816 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:43:01.296650  512816 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:43:01.612939  512816 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:43:01.613240  512816 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:43:01.833117  512816 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:43:02.012700  512816 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:43:02.166458  512816 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:43:02.299475  512816 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:43:02.455123  512816 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:43:02.456053  512816 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:43:02.458808  512816 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:43:02.462661  512816 out.go:252]   - Booting up control plane ...
	I1227 20:43:02.462775  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:43:02.462871  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:43:02.462950  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:43:02.480678  512816 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:43:02.481005  512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:43:02.489220  512816 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:43:02.489577  512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:43:02.489803  512816 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:43:02.667653  512816 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:43:02.667780  512816 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:47:02.664377  512816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000907971s
	I1227 20:47:02.664410  512816 kubeadm.go:319] 
	I1227 20:47:02.664468  512816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:47:02.664510  512816 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:47:02.664624  512816 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:47:02.664635  512816 kubeadm.go:319] 
	I1227 20:47:02.664740  512816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:47:02.664776  512816 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:47:02.664807  512816 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:47:02.664816  512816 kubeadm.go:319] 
	I1227 20:47:02.679580  512816 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:47:02.680004  512816 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:47:02.680112  512816 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:47:02.680529  512816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 20:47:02.680542  512816 kubeadm.go:319] 
	I1227 20:47:02.680639  512816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 20:47:02.680748  512816 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000907971s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
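At this point the kubelet never answered its health probe on 127.0.0.1:10248, so kubeadm gave up before any control-plane container existed. A diagnostic pass inside the node, mirroring the commands the kubeadm output suggests plus a cgroup check relevant to the v1 deprecation warning above (a sketch; run via minikube ssh -p force-systemd-flag-875839):

    sudo systemctl status kubelet            # is the unit active, or crash-looping?
    sudo journalctl -xeu kubelet | tail -n 50
    stat -fc %T /sys/fs/cgroup/              # cgroup2fs => cgroup v2; tmpfs => cgroup v1 host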
	
	I1227 20:47:02.680822  512816 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1227 20:47:03.181331  512816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:47:03.200995  512816 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:47:03.201068  512816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:47:03.212061  512816 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:47:03.212087  512816 kubeadm.go:158] found existing configuration files:
	
	I1227 20:47:03.212138  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:47:03.227054  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:47:03.227128  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:47:03.236695  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:47:03.246584  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:47:03.246660  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:47:03.256088  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:47:03.265760  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:47:03.265828  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:47:03.274961  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:47:03.284859  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:47:03.284927  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:47:03.295924  512816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:47:03.349564  512816 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:47:03.349626  512816 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:47:03.462446  512816 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:47:03.462536  512816 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:47:03.462580  512816 kubeadm.go:319] OS: Linux
	I1227 20:47:03.462630  512816 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:47:03.462683  512816 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:47:03.462733  512816 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:47:03.462790  512816 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:47:03.462843  512816 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:47:03.462895  512816 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:47:03.462945  512816 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:47:03.462998  512816 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:47:03.463048  512816 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:47:03.551609  512816 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:47:03.551726  512816 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:47:03.551823  512816 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:47:03.563738  512816 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:47:03.572951  512816 out.go:252]   - Generating certificates and keys ...
	I1227 20:47:03.573047  512816 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:47:03.573121  512816 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:47:03.573205  512816 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 20:47:03.573270  512816 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 20:47:03.573344  512816 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 20:47:03.573402  512816 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 20:47:03.573469  512816 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 20:47:03.573534  512816 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 20:47:03.573612  512816 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 20:47:03.573689  512816 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 20:47:03.573730  512816 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 20:47:03.573790  512816 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:47:03.910169  512816 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:47:04.085625  512816 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:47:04.204209  512816 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:47:04.500994  512816 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:47:04.759578  512816 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:47:04.766015  512816 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:47:04.773293  512816 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:47:04.777627  512816 out.go:252]   - Booting up control plane ...
	I1227 20:47:04.777736  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:47:04.777814  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:47:04.777883  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:47:04.799331  512816 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:47:04.799441  512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:47:04.809862  512816 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:47:04.809965  512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:47:04.810006  512816 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:47:04.984729  512816 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:47:04.984850  512816 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:51:04.984612  512816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000227461s
	I1227 20:51:04.984646  512816 kubeadm.go:319] 
	I1227 20:51:04.984705  512816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:51:04.984745  512816 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:51:04.984855  512816 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:51:04.984862  512816 kubeadm.go:319] 
	I1227 20:51:04.984968  512816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:51:04.985005  512816 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:51:04.985041  512816 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:51:04.985048  512816 kubeadm.go:319] 
	I1227 20:51:04.990635  512816 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:51:04.991126  512816 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:51:04.991276  512816 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:51:04.991544  512816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:51:04.991557  512816 kubeadm.go:319] 
	I1227 20:51:04.991627  512816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:51:04.991684  512816 kubeadm.go:403] duration metric: took 8m7.029842058s to StartCluster
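Both init attempts failed the same way: the kubelet never became healthy within 4m0s, which is consistent with the repeated SystemVerification warning that kubelet v1.35 rejects cgroup v1 hosts unless explicitly opted back in via the FailCgroupV1 option. A sketch of that opt-in against the config minikube writes, under two assumptions: the lowerCamelCase field name follows the usual KubeletConfiguration convention, and cgroup v1 really is the root cause here rather than some other kubelet misconfiguration:

    # Assumption: the option serializes as `failCgroupV1` in
    # /var/lib/kubelet/config.yaml; setting it false re-enables cgroup v1.
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet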
	I1227 20:51:04.991734  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:51:04.991795  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:51:05.027213  512816 cri.go:96] found id: ""
	I1227 20:51:05.027263  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.027273  512816 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:51:05.027283  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 20:51:05.027361  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:51:05.071930  512816 cri.go:96] found id: ""
	I1227 20:51:05.071965  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.071975  512816 logs.go:284] No container was found matching "etcd"
	I1227 20:51:05.071982  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 20:51:05.072053  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:51:05.107389  512816 cri.go:96] found id: ""
	I1227 20:51:05.107457  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.107479  512816 logs.go:284] No container was found matching "coredns"
	I1227 20:51:05.107501  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:51:05.107591  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:51:05.134987  512816 cri.go:96] found id: ""
	I1227 20:51:05.135061  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.135085  512816 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:51:05.135108  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:51:05.135234  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:51:05.165609  512816 cri.go:96] found id: ""
	I1227 20:51:05.165637  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.165646  512816 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:51:05.165653  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:51:05.165737  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:51:05.194172  512816 cri.go:96] found id: ""
	I1227 20:51:05.194198  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.194208  512816 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:51:05.194215  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 20:51:05.194319  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:51:05.222973  512816 cri.go:96] found id: ""
	I1227 20:51:05.223048  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.223072  512816 logs.go:284] No container was found matching "kindnet"
	I1227 20:51:05.223100  512816 logs.go:123] Gathering logs for kubelet ...
	I1227 20:51:05.223136  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:51:05.281082  512816 logs.go:123] Gathering logs for dmesg ...
	I1227 20:51:05.281117  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:51:05.296692  512816 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:51:05.296723  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:51:05.369377  512816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:51:05.360528    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.361234    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.362904    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.363554    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.365159    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:51:05.360528    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.361234    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.362904    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.363554    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.365159    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:51:05.369414  512816 logs.go:123] Gathering logs for containerd ...
	I1227 20:51:05.369427  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 20:51:05.409653  512816 logs.go:123] Gathering logs for container status ...
	I1227 20:51:05.409736  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 20:51:05.438490  512816 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000227461s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:51:05.438603  512816 out.go:285] * 
	W1227 20:51:05.438809  512816 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000227461s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:51:05.438888  512816 out.go:285] * 
	W1227 20:51:05.439283  512816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:51:05.446314  512816 out.go:203] 
	W1227 20:51:05.449296  512816 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000227461s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:51:05.449357  512816 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:51:05.449379  512816 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:51:05.452446  512816 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
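For local triage of this failure, a minimal sketch following the suggestion minikube itself prints in the stderr above (profile name and flags are taken from this run; the retry flag is the one minikube recommends for the kubelet cgroup-driver mismatch):

	# Retry the failed start with the kubelet cgroup driver pinned to systemd,
	# per minikube's own suggestion above.
	out/minikube-linux-arm64 start -p force-systemd-flag-875839 --memory=3072 \
	  --force-systemd --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet health check still times out, inspect the unit inside the
	# node, per the kubeadm hint ('systemctl status kubelet' / 'journalctl -xeu kubelet').
	out/minikube-linux-arm64 -p force-systemd-flag-875839 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p force-systemd-flag-875839 ssh -- sudo journalctl -xeu kubelet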
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-875839 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 20:51:05.807551173 +0000 UTC m=+3343.326692542
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-875839
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-875839:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479",
	        "Created": "2025-12-27T20:42:49.481957215Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 513250,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:42:49.541236963Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479/hostname",
	        "HostsPath": "/var/lib/docker/containers/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479/hosts",
	        "LogPath": "/var/lib/docker/containers/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479/51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479-json.log",
	        "Name": "/force-systemd-flag-875839",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-875839:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-875839",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51a34498e61d9994e10467e8c2664238bcd3bf09c00d13871ab2321d19024479",
	                "LowerDir": "/var/lib/docker/overlay2/318700413bb57e4b591bd4cf47a946692bce00ab93f013e0ac25b15591e84ff2-init/diff:/var/lib/docker/overlay2/3aa037d6df727552c898397d6b697d27a219037ea9700eb1f4b4eaf57c46a788/diff",
	                "MergedDir": "/var/lib/docker/overlay2/318700413bb57e4b591bd4cf47a946692bce00ab93f013e0ac25b15591e84ff2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/318700413bb57e4b591bd4cf47a946692bce00ab93f013e0ac25b15591e84ff2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/318700413bb57e4b591bd4cf47a946692bce00ab93f013e0ac25b15591e84ff2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-875839",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-875839/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-875839",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-875839",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-875839",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ec8b28bb9195d136ae7929fe8ef067500c7b4146ac5cfa62d00f1b9143618ff",
	            "SandboxKey": "/var/run/docker/netns/7ec8b28bb919",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-875839": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:1b:cb:db:66:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5ecbce927db28a7fe0fa1b2174604ca2b9dda404938126b7566e4272488dff0",
	                    "EndpointID": "db8cb25113df553cef72dd34d92105a5feac3d21173f13b1d948f19796785c6e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-875839",
	                        "51a34498e61d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
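Rather than scanning the full inspect JSON above, a Go-template query pulls out the fields this post-mortem cares about; a small sketch (field paths match the dump above, profile name from this run):

	# Container state and the host port mapped to the API server (8443/tcp).
	docker inspect -f '{{.State.Status}}' force-systemd-flag-875839
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' force-systemd-flag-875839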
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-875839 -n force-systemd-flag-875839
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-875839 -n force-systemd-flag-875839: exit status 6 (345.581968ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:51:06.163669  544004 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-875839" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
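The status output above warns that kubectl points at a stale context and that the profile is missing from the kubeconfig; a small sketch of the repair minikube suggests (it may still report an error here, since the apiserver never came up):

	# Rewrite the kubeconfig entry for this profile, as the warning suggests.
	out/minikube-linux-arm64 update-context -p force-systemd-flag-875839
	# Confirm which context kubectl now targets.
	kubectl config current-context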
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-875839 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-551586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ stop    │ -p old-k8s-version-551586 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-551586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:45 UTC │
	│ start   │ -p old-k8s-version-551586 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:45 UTC │ 27 Dec 25 20:46 UTC │
	│ image   │ old-k8s-version-551586 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
	│ pause   │ -p old-k8s-version-551586 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
	│ unpause │ -p old-k8s-version-551586 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
	│ delete  │ -p old-k8s-version-551586                                                                                                                                                                                                                           │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
	│ delete  │ -p old-k8s-version-551586                                                                                                                                                                                                                           │ old-k8s-version-551586    │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:46 UTC │
	│ start   │ -p no-preload-259913 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:46 UTC │ 27 Dec 25 20:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-259913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:47 UTC │ 27 Dec 25 20:47 UTC │
	│ stop    │ -p no-preload-259913 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:47 UTC │ 27 Dec 25 20:48 UTC │
	│ addons  │ enable dashboard -p no-preload-259913 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ start   │ -p no-preload-259913 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:48 UTC │ 27 Dec 25 20:48 UTC │
	│ image   │ no-preload-259913 image list --format=json                                                                                                                                                                                                          │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ pause   │ -p no-preload-259913 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ unpause │ -p no-preload-259913 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p no-preload-259913                                                                                                                                                                                                                                │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ delete  │ -p no-preload-259913                                                                                                                                                                                                                                │ no-preload-259913         │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ start   │ -p embed-certs-920276 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-920276        │ jenkins │ v1.37.0 │ 27 Dec 25 20:49 UTC │ 27 Dec 25 20:49 UTC │
	│ addons  │ enable metrics-server -p embed-certs-920276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-920276        │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:50 UTC │
	│ stop    │ -p embed-certs-920276 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-920276        │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:50 UTC │
	│ addons  │ enable dashboard -p embed-certs-920276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-920276        │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │ 27 Dec 25 20:50 UTC │
	│ start   │ -p embed-certs-920276 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-920276        │ jenkins │ v1.37.0 │ 27 Dec 25 20:50 UTC │                     │
	│ ssh     │ force-systemd-flag-875839 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-875839 │ jenkins │ v1.37.0 │ 27 Dec 25 20:51 UTC │ 27 Dec 25 20:51 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:50:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:50:16.009614  540878 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:50:16.009824  540878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:50:16.009873  540878 out.go:374] Setting ErrFile to fd 2...
	I1227 20:50:16.009898  540878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:50:16.010215  540878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:50:16.010710  540878 out.go:368] Setting JSON to false
	I1227 20:50:16.011723  540878 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9167,"bootTime":1766859449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 20:50:16.011832  540878 start.go:143] virtualization:  
	I1227 20:50:16.015258  540878 out.go:179] * [embed-certs-920276] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:50:16.019266  540878 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:50:16.019350  540878 notify.go:221] Checking for updates...
	I1227 20:50:16.025175  540878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:50:16.028336  540878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:50:16.031417  540878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 20:50:16.034546  540878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:50:16.037603  540878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:50:16.041121  540878 config.go:182] Loaded profile config "embed-certs-920276": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:50:16.041768  540878 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:50:16.072984  540878 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:50:16.073117  540878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:50:16.134428  540878 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:50:16.124444602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:50:16.134543  540878 docker.go:319] overlay module found
	I1227 20:50:16.137702  540878 out.go:179] * Using the docker driver based on existing profile
	I1227 20:50:16.140505  540878 start.go:309] selected driver: docker
	I1227 20:50:16.140530  540878 start.go:928] validating driver "docker" against &{Name:embed-certs-920276 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:50:16.140652  540878 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:50:16.141429  540878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:50:16.204255  540878 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:50:16.19533537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
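	The driver re-validation above keys off a handful of fields in that dump (NCPU, MemTotal, CgroupDriver). The same facts can be pulled by hand with docker's Go-template formatting; a minimal sketch using only fields visible in the output above:
	
		docker system info --format '{{json .}}'        # the raw dump minikube parses
		docker system info --format 'NCPU={{.NCPU}} MemTotal={{.MemTotal}} CgroupDriver={{.CgroupDriver}}'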
	I1227 20:50:16.204604  540878 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:50:16.204633  540878 cni.go:84] Creating CNI manager for ""
	I1227 20:50:16.204694  540878 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 20:50:16.204733  540878 start.go:353] cluster config:
	{Name:embed-certs-920276 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:50:16.208096  540878 out.go:179] * Starting "embed-certs-920276" primary control-plane node in "embed-certs-920276" cluster
	I1227 20:50:16.210965  540878 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 20:50:16.213876  540878 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:50:16.216711  540878 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:50:16.216761  540878 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 20:50:16.216779  540878 cache.go:65] Caching tarball of preloaded images
	I1227 20:50:16.216782  540878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:50:16.216868  540878 preload.go:251] Found /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 20:50:16.216879  540878 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 20:50:16.217000  540878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/config.json ...
	I1227 20:50:16.236850  540878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:50:16.236873  540878 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:50:16.236890  540878 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:50:16.236922  540878 start.go:360] acquireMachinesLock for embed-certs-920276: {Name:mk59d29820c96aa85d20d8a3a5e4016f0bf5a9a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:50:16.236984  540878 start.go:364] duration metric: took 38.564µs to acquireMachinesLock for "embed-certs-920276"
	I1227 20:50:16.237007  540878 start.go:96] Skipping create...Using existing machine configuration
	I1227 20:50:16.237013  540878 fix.go:54] fixHost starting: 
	I1227 20:50:16.237287  540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
	I1227 20:50:16.254350  540878 fix.go:112] recreateIfNeeded on embed-certs-920276: state=Stopped err=<nil>
	W1227 20:50:16.254383  540878 fix.go:138] unexpected machine state, will restart: <nil>
	I1227 20:50:16.257664  540878 out.go:252] * Restarting existing docker container for "embed-certs-920276" ...
	I1227 20:50:16.257766  540878 cli_runner.go:164] Run: docker start embed-certs-920276
	I1227 20:50:16.519076  540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
	I1227 20:50:16.538059  540878 kic.go:430] container "embed-certs-920276" state is running.
	I1227 20:50:16.538448  540878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-920276
	I1227 20:50:16.563853  540878 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/config.json ...
	I1227 20:50:16.564098  540878 machine.go:94] provisionDockerMachine start ...
	I1227 20:50:16.564169  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:16.583540  540878 main.go:144] libmachine: Using SSH client type: native
	I1227 20:50:16.583868  540878 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1227 20:50:16.583878  540878 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:50:16.584757  540878 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59522->127.0.0.1:33451: read: connection reset by peer
	I1227 20:50:19.722845  540878 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-920276
	
	I1227 20:50:19.722874  540878 ubuntu.go:182] provisioning hostname "embed-certs-920276"
	I1227 20:50:19.722958  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:19.740925  540878 main.go:144] libmachine: Using SSH client type: native
	I1227 20:50:19.741253  540878 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1227 20:50:19.741271  540878 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-920276 && echo "embed-certs-920276" | sudo tee /etc/hostname
	I1227 20:50:19.892459  540878 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-920276
	
	I1227 20:50:19.892548  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:19.910533  540878 main.go:144] libmachine: Using SSH client type: native
	I1227 20:50:19.910860  540878 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1227 20:50:19.910876  540878 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-920276' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-920276/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-920276' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:50:20.054893  540878 main.go:144] libmachine: SSH cmd err, output: <nil>: 
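	The guarded script above is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is absent, so the empty SSH output here is fine on re-runs as well as first runs. A quick hand-check from the host, reusing the container name from the log:
	
		docker exec embed-certs-920276 grep embed-certs-920276 /etc/hosts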
	I1227 20:50:20.054918  540878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-300670/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-300670/.minikube}
	I1227 20:50:20.054958  540878 ubuntu.go:190] setting up certificates
	I1227 20:50:20.054967  540878 provision.go:84] configureAuth start
	I1227 20:50:20.055026  540878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-920276
	I1227 20:50:20.075740  540878 provision.go:143] copyHostCerts
	I1227 20:50:20.075817  540878 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem, removing ...
	I1227 20:50:20.075833  540878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
	I1227 20:50:20.075919  540878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem (1082 bytes)
	I1227 20:50:20.076024  540878 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem, removing ...
	I1227 20:50:20.076029  540878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
	I1227 20:50:20.076056  540878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem (1123 bytes)
	I1227 20:50:20.076112  540878 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem, removing ...
	I1227 20:50:20.076117  540878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
	I1227 20:50:20.076140  540878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem (1679 bytes)
	I1227 20:50:20.076187  540878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem org=jenkins.embed-certs-920276 san=[127.0.0.1 192.168.85.2 embed-certs-920276 localhost minikube]
	I1227 20:50:20.841497  540878 provision.go:177] copyRemoteCerts
	I1227 20:50:20.841575  540878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:50:20.841619  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:20.858882  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:20.959134  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:50:20.977501  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 20:50:20.997002  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 20:50:21.016593  540878 provision.go:87] duration metric: took 961.611509ms to configureAuth
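	configureAuth above issued a server certificate whose SANs (the san=[...] list) must cover every name and address a client may dial. A spot-check of the generated PEM, using the path from the log:
	
		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem \
		  | grep -A1 'Subject Alternative Name'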
	I1227 20:50:21.016620  540878 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:50:21.016821  540878 config.go:182] Loaded profile config "embed-certs-920276": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:50:21.016829  540878 machine.go:97] duration metric: took 4.452715376s to provisionDockerMachine
	I1227 20:50:21.016836  540878 start.go:293] postStartSetup for "embed-certs-920276" (driver="docker")
	I1227 20:50:21.016846  540878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:50:21.016906  540878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:50:21.016948  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:21.034110  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:21.131561  540878 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:50:21.135122  540878 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:50:21.135149  540878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:50:21.135161  540878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/addons for local assets ...
	I1227 20:50:21.135244  540878 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/files for local assets ...
	I1227 20:50:21.135319  540878 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> 3025412.pem in /etc/ssl/certs
	I1227 20:50:21.135419  540878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:50:21.143536  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /etc/ssl/certs/3025412.pem (1708 bytes)
	I1227 20:50:21.162629  540878 start.go:296] duration metric: took 145.776326ms for postStartSetup
	I1227 20:50:21.162767  540878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:50:21.162811  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:21.180184  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:21.280723  540878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:50:21.285776  540878 fix.go:56] duration metric: took 5.048755717s for fixHost
	I1227 20:50:21.285803  540878 start.go:83] releasing machines lock for "embed-certs-920276", held for 5.048807016s
	I1227 20:50:21.285878  540878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-920276
	I1227 20:50:21.302621  540878 ssh_runner.go:195] Run: cat /version.json
	I1227 20:50:21.302676  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:21.302946  540878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:50:21.303011  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:21.324721  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:21.333253  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:21.541785  540878 ssh_runner.go:195] Run: systemctl --version
	I1227 20:50:21.550188  540878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:50:21.555235  540878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:50:21.555307  540878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:50:21.564159  540878 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
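	The find invocation above is sent over SSH, which is why its globs and parentheses appear unquoted in the log; run locally they must be escaped. A dry-run variant (an assumption, derived from the logged command) that prints what would be renamed without moving anything:
	
		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) -print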
	I1227 20:50:21.564187  540878 start.go:496] detecting cgroup driver to use...
	I1227 20:50:21.564219  540878 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1227 20:50:21.564270  540878 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 20:50:21.582495  540878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 20:50:21.598352  540878 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:50:21.598440  540878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:50:21.614194  540878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:50:21.628000  540878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:50:21.737467  540878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:50:21.856279  540878 docker.go:234] disabling docker service ...
	I1227 20:50:21.856433  540878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:50:21.872148  540878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:50:21.885634  540878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:50:22.009069  540878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:50:22.132051  540878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:50:22.145487  540878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:50:22.159934  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 20:50:22.169330  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 20:50:22.179268  540878 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1227 20:50:22.179388  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1227 20:50:22.188886  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:50:22.197995  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 20:50:22.206973  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:50:22.216466  540878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:50:22.224747  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 20:50:22.233914  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 20:50:22.243547  540878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 20:50:22.253157  540878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:50:22.261154  540878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:50:22.269079  540878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:50:22.417152  540878 ssh_runner.go:195] Run: sudo systemctl restart containerd
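	The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false) to match the cgroupfs driver detected on the host, and the daemon-reload/restart pair makes them take effect. One way to confirm the result from the host; a sketch:
	
		docker exec embed-certs-920276 grep -n 'SystemdCgroup' /etc/containerd/config.toml
		# expected after the edit above: SystemdCgroup = false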
	I1227 20:50:22.573171  540878 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 20:50:22.573296  540878 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 20:50:22.577551  540878 start.go:574] Will wait 60s for crictl version
	I1227 20:50:22.577629  540878 ssh_runner.go:195] Run: which crictl
	I1227 20:50:22.581782  540878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:50:22.607740  540878 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 20:50:22.607819  540878 ssh_runner.go:195] Run: containerd --version
	I1227 20:50:22.631731  540878 ssh_runner.go:195] Run: containerd --version
	I1227 20:50:22.657731  540878 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 20:50:22.660751  540878 cli_runner.go:164] Run: docker network inspect embed-certs-920276 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:50:22.677239  540878 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:50:22.681297  540878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:50:22.691421  540878 kubeadm.go:884] updating cluster {Name:embed-certs-920276 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:50:22.691541  540878 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:50:22.691616  540878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:50:22.721764  540878 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 20:50:22.721792  540878 containerd.go:542] Images already preloaded, skipping extraction
	I1227 20:50:22.721868  540878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:50:22.747556  540878 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 20:50:22.747583  540878 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:50:22.747591  540878 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1227 20:50:22.747701  540878 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-920276 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
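	The [Unit]/[Service] fragment above is installed as a systemd drop-in rather than a full unit, so the bare ExecStart= line is deliberate: an empty assignment clears the base unit's command before the new kubelet command line is set. The installed file can be read back (drop-in path from the scp a few lines below):
	
		docker exec embed-certs-920276 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf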
	I1227 20:50:22.747783  540878 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 20:50:22.773222  540878 cni.go:84] Creating CNI manager for ""
	I1227 20:50:22.773248  540878 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 20:50:22.773306  540878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:50:22.773339  540878 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-920276 NodeName:embed-certs-920276 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:50:22.773474  540878 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-920276"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:50:22.773549  540878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:50:22.781463  540878 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:50:22.781534  540878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:50:22.789363  540878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1227 20:50:22.802095  540878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:50:22.814688  540878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
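	The 2251-byte file scp'd just above is the four-document kubeadm config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). On a kubeadm recent enough to ship the subcommand, the bundle can be sanity-checked before it is ever applied; a sketch:
	
		sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new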
	I1227 20:50:22.827306  540878 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:50:22.831055  540878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:50:22.841449  540878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:50:22.952233  540878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:50:22.969630  540878 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276 for IP: 192.168.85.2
	I1227 20:50:22.969653  540878 certs.go:195] generating shared ca certs ...
	I1227 20:50:22.969668  540878 certs.go:227] acquiring lock for ca certs: {Name:mkf93c4b7b6f0a265527090e39bdf731f6a1491b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:50:22.969841  540878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key
	I1227 20:50:22.969895  540878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key
	I1227 20:50:22.969908  540878 certs.go:257] generating profile certs ...
	I1227 20:50:22.969996  540878 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/client.key
	I1227 20:50:22.970070  540878 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/apiserver.key.fca527cf
	I1227 20:50:22.970115  540878 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/proxy-client.key
	I1227 20:50:22.970226  540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem (1338 bytes)
	W1227 20:50:22.970263  540878 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541_empty.pem, impossibly tiny 0 bytes
	I1227 20:50:22.970275  540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:50:22.970301  540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem (1082 bytes)
	I1227 20:50:22.970329  540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:50:22.970357  540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem (1679 bytes)
	I1227 20:50:22.970412  540878 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem (1708 bytes)
	I1227 20:50:22.971022  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:50:22.992907  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:50:23.012614  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:50:23.039920  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:50:23.085951  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 20:50:23.108228  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:50:23.135883  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:50:23.165679  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/embed-certs-920276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:50:23.188520  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem --> /usr/share/ca-certificates/302541.pem (1338 bytes)
	I1227 20:50:23.215897  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /usr/share/ca-certificates/3025412.pem (1708 bytes)
	I1227 20:50:23.234234  540878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:50:23.272269  540878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:50:23.285745  540878 ssh_runner.go:195] Run: openssl version
	I1227 20:50:23.294229  540878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3025412.pem
	I1227 20:50:23.304072  540878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3025412.pem /etc/ssl/certs/3025412.pem
	I1227 20:50:23.313157  540878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3025412.pem
	I1227 20:50:23.317340  540878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:01 /usr/share/ca-certificates/3025412.pem
	I1227 20:50:23.317423  540878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3025412.pem
	I1227 20:50:23.359282  540878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:50:23.367427  540878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:50:23.375513  540878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:50:23.383433  540878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:50:23.387312  540878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:50:23.387383  540878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:50:23.428769  540878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:50:23.436784  540878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/302541.pem
	I1227 20:50:23.444881  540878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/302541.pem /etc/ssl/certs/302541.pem
	I1227 20:50:23.453132  540878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302541.pem
	I1227 20:50:23.457190  540878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:01 /usr/share/ca-certificates/302541.pem
	I1227 20:50:23.457275  540878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302541.pem
	I1227 20:50:23.499008  540878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
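	The test/ln/openssl triples above implement OpenSSL's c_rehash convention: a CA under /etc/ssl/certs is discoverable once a <subject-hash>.0 symlink points at it, which is exactly what the checks for 3ec20f2e.0, b5213941.0 and 51391683.0 verify. The same link can be created by hand from commands already shown in the log:
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"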
	I1227 20:50:23.506926  540878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:50:23.510865  540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1227 20:50:23.552378  540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1227 20:50:23.600920  540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1227 20:50:23.645107  540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1227 20:50:23.687352  540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1227 20:50:23.739989  540878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
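	Each -checkend 86400 above asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits 0 when it does not, which is what lets the restart path reuse the existing PKI. For example, against the API server cert copied earlier:
	
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
		  && echo 'still valid for at least 24h'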
	I1227 20:50:23.829044  540878 kubeadm.go:401] StartCluster: {Name:embed-certs-920276 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-920276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:50:23.829200  540878 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 20:50:23.829312  540878 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:50:23.883973  540878 cri.go:96] found id: "2f173cac0f9685e08a95dd6f68a4ef15dc9852da4472a8d02f39aa3cc8109a83"
	I1227 20:50:23.884058  540878 cri.go:96] found id: "97df2f4ff1d74c17de7edf548cb377ce6d8127c7abcc5115f1c261fbf453f2b7"
	I1227 20:50:23.884079  540878 cri.go:96] found id: "96c0eef1af88cbdf7bad08bd45a3a95a244df655c59b89680d0574df7851aa36"
	I1227 20:50:23.884099  540878 cri.go:96] found id: "f7b535b8d6864bb0b7f11f80357ef9d6b37ddd5b7bec49646da3f88fc8651894"
	I1227 20:50:23.884130  540878 cri.go:96] found id: "bf5f50631af9fc470adedd8fe5c7c8eb2b4721b6f2e133c19cdb0545fa44131f"
	I1227 20:50:23.884153  540878 cri.go:96] found id: "462eb09c51ab0f37fe7780e5ee4429fb5d2162825bcfc0c17411f23245ee849d"
	I1227 20:50:23.884259  540878 cri.go:96] found id: "caf47dec1770cd41b912d94c45f19879a2fa0df92e005492498bc5827a53bebf"
	I1227 20:50:23.884279  540878 cri.go:96] found id: "b4bf5714be87c6bce43295497a8d95936539734d668f791ac9545a602dc8f481"
	I1227 20:50:23.884297  540878 cri.go:96] found id: ""
	I1227 20:50:23.884386  540878 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1227 20:50:23.924357  540878 cri.go:123] JSON = [{"ociVersion":"1.2.1","id":"56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5","pid":904,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5/rootfs","created":"2025-12-27T20:50:23.81196477Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-920276_3df5a8a212961647d3066b74c35eb3ab","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-embed-certs-920276","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3df5a8a212961647d3066b74c35eb3ab"},"owner":"root"},{"ociVersion":"1.2.1","id":"73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed","pid":957,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed/rootfs","created":"2025-12-27T20:50:23.903685959Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-920276_7bb6d5e38042126465933f10ab5bbf65","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-embed-certs-920276","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7bb6d5e38042126465933f10ab5bbf65"},"owner":"root"},{"ociVersion":"1.2.1","id":"c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a","pid":918,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a/rootfs","created":"2025-12-27T20:50:23.808543271Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-920276_19ec925d2741946aa51ff0f936fea0eb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-embed-certs-920276","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"19ec925d2741946aa51ff0f936fea0eb"},"owner":"root"}]
	I1227 20:50:23.924553  540878 cri.go:133] list returned 3 containers
	I1227 20:50:23.924594  540878 cri.go:136] container: {ID:56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5 Status:created}
	I1227 20:50:23.924629  540878 cri.go:138] skipping 56d9073ab735501c02dee5f7f12db20f121d07114d7ab986e67d986a250ae7a5 - not in ps
	I1227 20:50:23.924665  540878 cri.go:136] container: {ID:73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed Status:created}
	I1227 20:50:23.924690  540878 cri.go:138] skipping 73039e4b619374e430b32bb41ebc24043874e560c531466b5fe24644fd13e8ed - not in ps
	I1227 20:50:23.924710  540878 cri.go:136] container: {ID:c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a Status:running}
	I1227 20:50:23.924744  540878 cri.go:138] skipping c703b9a7fb1fcc5bb2dfdcc2fe4b8ab02db66ffa9cb457a724ab0fa27399156a - not in ps
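	runc's list above (with --root pointed at containerd's k8s.io runc state) sees tasks the moment they are created, before crictl ps reports them, which is why all three just-created sandboxes are skipped as "not in ps". To pull just the id/status pairs out of that JSON by hand (assumes jq is installed):
	
		sudo runc --root /run/containerd/runc/k8s.io list -f json \
		  | jq -r '.[] | "\(.id) \(.status)"'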
	I1227 20:50:23.924842  540878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:50:23.937410  540878 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1227 20:50:23.937485  540878 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1227 20:50:23.937587  540878 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1227 20:50:23.946616  540878 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1227 20:50:23.947144  540878 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-920276" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:50:23.947400  540878 kubeconfig.go:62] /home/jenkins/minikube-integration/22332-300670/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-920276" cluster setting kubeconfig missing "embed-certs-920276" context setting]
	I1227 20:50:23.947756  540878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/kubeconfig: {Name:mke76863c55a53bb5beeec750cba490366e88e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:50:23.949419  540878 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1227 20:50:23.968578  540878 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1227 20:50:23.968663  540878 kubeadm.go:602] duration metric: took 31.15895ms to restartPrimaryControlPlane
	I1227 20:50:23.968748  540878 kubeadm.go:403] duration metric: took 139.715352ms to StartCluster
	I1227 20:50:23.968782  540878 settings.go:142] acquiring lock: {Name:mk48481ad33e4d60aedaf03b00ac874fd5c339d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:50:23.968871  540878 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:50:23.970006  540878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/kubeconfig: {Name:mke76863c55a53bb5beeec750cba490366e88e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:50:23.970341  540878 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 20:50:23.970844  540878 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 20:50:23.970928  540878 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-920276"
	I1227 20:50:23.970942  540878 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-920276"
	W1227 20:50:23.970948  540878 addons.go:248] addon storage-provisioner should already be in state true
	I1227 20:50:23.970972  540878 host.go:66] Checking if "embed-certs-920276" exists ...
	I1227 20:50:23.971681  540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
	I1227 20:50:23.972006  540878 config.go:182] Loaded profile config "embed-certs-920276": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:50:23.972096  540878 addons.go:70] Setting default-storageclass=true in profile "embed-certs-920276"
	I1227 20:50:23.972154  540878 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-920276"
	I1227 20:50:23.972490  540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
	I1227 20:50:23.978682  540878 addons.go:70] Setting dashboard=true in profile "embed-certs-920276"
	I1227 20:50:23.978774  540878 addons.go:239] Setting addon dashboard=true in "embed-certs-920276"
	W1227 20:50:23.978798  540878 addons.go:248] addon dashboard should already be in state true
	I1227 20:50:23.978865  540878 host.go:66] Checking if "embed-certs-920276" exists ...
	I1227 20:50:23.979499  540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
	I1227 20:50:23.979691  540878 addons.go:70] Setting metrics-server=true in profile "embed-certs-920276"
	I1227 20:50:23.979728  540878 addons.go:239] Setting addon metrics-server=true in "embed-certs-920276"
	W1227 20:50:23.979752  540878 addons.go:248] addon metrics-server should already be in state true
	I1227 20:50:23.979808  540878 host.go:66] Checking if "embed-certs-920276" exists ...
	I1227 20:50:23.980264  540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
	I1227 20:50:23.994054  540878 out.go:179] * Verifying Kubernetes components...
	I1227 20:50:23.997367  540878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:50:24.012702  540878 addons.go:239] Setting addon default-storageclass=true in "embed-certs-920276"
	W1227 20:50:24.012738  540878 addons.go:248] addon default-storageclass should already be in state true
	I1227 20:50:24.012763  540878 host.go:66] Checking if "embed-certs-920276" exists ...
	I1227 20:50:24.013215  540878 cli_runner.go:164] Run: docker container inspect embed-certs-920276 --format={{.State.Status}}
	I1227 20:50:24.049152  540878 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 20:50:24.052125  540878 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:50:24.052149  540878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 20:50:24.052226  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:24.067360  540878 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 20:50:24.067387  540878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 20:50:24.067450  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:24.081498  540878 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1227 20:50:24.081638  540878 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1227 20:50:24.084448  540878 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1227 20:50:24.084520  540878 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1227 20:50:24.084532  540878 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1227 20:50:24.084594  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:24.090221  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1227 20:50:24.090271  540878 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1227 20:50:24.090392  540878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-920276
	I1227 20:50:24.118091  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:24.146640  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:24.153146  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:24.164331  540878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/embed-certs-920276/id_rsa Username:docker}
	I1227 20:50:24.301755  540878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:50:24.362291  540878 node_ready.go:35] waiting up to 6m0s for node "embed-certs-920276" to be "Ready" ...
	I1227 20:50:24.432110  540878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 20:50:24.474944  540878 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1227 20:50:24.475020  540878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1227 20:50:24.574982  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1227 20:50:24.575061  540878 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1227 20:50:24.611847  540878 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1227 20:50:24.611926  540878 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1227 20:50:24.653894  540878 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 20:50:24.653976  540878 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1227 20:50:24.680931  540878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 20:50:24.817157  540878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1227 20:50:24.829273  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1227 20:50:24.829359  540878 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1227 20:50:24.944579  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1227 20:50:24.944661  540878 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1227 20:50:25.031754  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1227 20:50:25.031844  540878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1227 20:50:25.167665  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1227 20:50:25.167762  540878 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1227 20:50:25.337071  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1227 20:50:25.337154  540878 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1227 20:50:25.495644  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1227 20:50:25.495725  540878 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1227 20:50:25.545877  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1227 20:50:25.545900  540878 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1227 20:50:25.584252  540878 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:50:25.584277  540878 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1227 20:50:25.614496  540878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1227 20:50:27.875637  540878 node_ready.go:49] node "embed-certs-920276" is "Ready"
	I1227 20:50:27.875679  540878 node_ready.go:38] duration metric: took 3.513294685s for node "embed-certs-920276" to be "Ready" ...
	I1227 20:50:27.875698  540878 api_server.go:52] waiting for apiserver process to appear ...
	I1227 20:50:27.875758  540878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:50:28.193223  540878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.761025119s)
	I1227 20:50:30.740883  540878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.059868575s)
	I1227 20:50:30.804397  540878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.98715762s)
	I1227 20:50:30.804434  540878 addons.go:495] Verifying addon metrics-server=true in "embed-certs-920276"
	I1227 20:50:30.804544  540878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.190021364s)
	I1227 20:50:30.804696  540878 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.928927357s)
	I1227 20:50:30.804718  540878 api_server.go:72] duration metric: took 6.834310177s to wait for apiserver process to appear ...
	I1227 20:50:30.804725  540878 api_server.go:88] waiting for apiserver healthz status ...
	I1227 20:50:30.804744  540878 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1227 20:50:30.807568  540878 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-920276 addons enable metrics-server
	
	I1227 20:50:30.810693  540878 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1227 20:50:30.813254  540878 api_server.go:325] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1227 20:50:30.814246  540878 api_server.go:141] control plane version: v1.35.0
	I1227 20:50:30.814273  540878 api_server.go:131] duration metric: took 9.541191ms to wait for apiserver health ...
	I1227 20:50:30.814284  540878 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 20:50:30.814548  540878 addons.go:530] duration metric: took 6.843706785s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1227 20:50:30.817883  540878 system_pods.go:59] 9 kube-system pods found
	I1227 20:50:30.817928  540878 system_pods.go:61] "coredns-7d764666f9-fsvn9" [db2bf94a-5c69-4a44-b6cc-d70fcb4b7df8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:50:30.817937  540878 system_pods.go:61] "etcd-embed-certs-920276" [6c5c45b2-36fb-4a5d-be57-37fbf3d73d1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:50:30.817944  540878 system_pods.go:61] "kindnet-nhb2c" [823229c3-d885-4f86-a40a-1c7d2e155396] Running
	I1227 20:50:30.817951  540878 system_pods.go:61] "kube-apiserver-embed-certs-920276" [d60204e3-a1c1-4e78-a489-97ca9d0e3b5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:50:30.817957  540878 system_pods.go:61] "kube-controller-manager-embed-certs-920276" [625d25e3-c1e7-44af-a45e-d95244ada624] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:50:30.817968  540878 system_pods.go:61] "kube-proxy-shcp6" [e4fe1ebf-141f-4b36-9612-ae8f13f002b8] Running
	I1227 20:50:30.817975  540878 system_pods.go:61] "kube-scheduler-embed-certs-920276" [f2fff7ad-0808-4a82-92a9-be7f96fa5383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:50:30.817986  540878 system_pods.go:61] "metrics-server-5d785b57d4-qjjgk" [0a5f5853-ba0f-4a69-aea1-b88e86d0d92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 20:50:30.817999  540878 system_pods.go:61] "storage-provisioner" [cbb2eec5-c485-4b1f-ad76-bf4511e17a05] Running
	I1227 20:50:30.818006  540878 system_pods.go:74] duration metric: took 3.716295ms to wait for pod list to return data ...
	I1227 20:50:30.818017  540878 default_sa.go:34] waiting for default service account to be created ...
	I1227 20:50:30.820965  540878 default_sa.go:45] found service account: "default"
	I1227 20:50:30.820992  540878 default_sa.go:55] duration metric: took 2.967497ms for default service account to be created ...
	I1227 20:50:30.821004  540878 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 20:50:30.824451  540878 system_pods.go:86] 9 kube-system pods found
	I1227 20:50:30.824486  540878 system_pods.go:89] "coredns-7d764666f9-fsvn9" [db2bf94a-5c69-4a44-b6cc-d70fcb4b7df8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 20:50:30.824495  540878 system_pods.go:89] "etcd-embed-certs-920276" [6c5c45b2-36fb-4a5d-be57-37fbf3d73d1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 20:50:30.824502  540878 system_pods.go:89] "kindnet-nhb2c" [823229c3-d885-4f86-a40a-1c7d2e155396] Running
	I1227 20:50:30.824509  540878 system_pods.go:89] "kube-apiserver-embed-certs-920276" [d60204e3-a1c1-4e78-a489-97ca9d0e3b5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1227 20:50:30.824516  540878 system_pods.go:89] "kube-controller-manager-embed-certs-920276" [625d25e3-c1e7-44af-a45e-d95244ada624] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 20:50:30.824521  540878 system_pods.go:89] "kube-proxy-shcp6" [e4fe1ebf-141f-4b36-9612-ae8f13f002b8] Running
	I1227 20:50:30.824528  540878 system_pods.go:89] "kube-scheduler-embed-certs-920276" [f2fff7ad-0808-4a82-92a9-be7f96fa5383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1227 20:50:30.824540  540878 system_pods.go:89] "metrics-server-5d785b57d4-qjjgk" [0a5f5853-ba0f-4a69-aea1-b88e86d0d92a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1227 20:50:30.824553  540878 system_pods.go:89] "storage-provisioner" [cbb2eec5-c485-4b1f-ad76-bf4511e17a05] Running
	I1227 20:50:30.824561  540878 system_pods.go:126] duration metric: took 3.551708ms to wait for k8s-apps to be running ...
	I1227 20:50:30.824572  540878 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 20:50:30.824629  540878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:50:30.838985  540878 system_svc.go:56] duration metric: took 14.403352ms WaitForService to wait for kubelet
	I1227 20:50:30.839017  540878 kubeadm.go:587] duration metric: took 6.868606767s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 20:50:30.839039  540878 node_conditions.go:102] verifying NodePressure condition ...
	I1227 20:50:30.842050  540878 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 20:50:30.842143  540878 node_conditions.go:123] node cpu capacity is 2
	I1227 20:50:30.842164  540878 node_conditions.go:105] duration metric: took 3.12004ms to run NodePressure ...
	I1227 20:50:30.842178  540878 start.go:242] waiting for startup goroutines ...
	I1227 20:50:30.842186  540878 start.go:247] waiting for cluster config update ...
	I1227 20:50:30.842212  540878 start.go:256] writing updated cluster config ...
	I1227 20:50:30.842545  540878 ssh_runner.go:195] Run: rm -f paused
	I1227 20:50:30.847321  540878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 20:50:30.850939  540878 pod_ready.go:83] waiting for pod "coredns-7d764666f9-fsvn9" in "kube-system" namespace to be "Ready" or be gone ...
	W1227 20:50:32.857142  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:34.862026  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:37.356522  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:39.356733  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:41.357208  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:43.857335  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:46.357025  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:48.856979  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:51.356140  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:53.356678  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:55.856464  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:50:58.356596  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:51:00.358635  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	I1227 20:51:04.984612  512816 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000227461s
	I1227 20:51:04.984646  512816 kubeadm.go:319] 
	I1227 20:51:04.984705  512816 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:51:04.984745  512816 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:51:04.984855  512816 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:51:04.984862  512816 kubeadm.go:319] 
	I1227 20:51:04.984968  512816 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:51:04.985005  512816 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:51:04.985041  512816 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:51:04.985048  512816 kubeadm.go:319] 
	I1227 20:51:04.990635  512816 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:51:04.991126  512816 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:51:04.991276  512816 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:51:04.991544  512816 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:51:04.991557  512816 kubeadm.go:319] 
	I1227 20:51:04.991627  512816 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:51:04.991684  512816 kubeadm.go:403] duration metric: took 8m7.029842058s to StartCluster
	I1227 20:51:04.991734  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:51:04.991795  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:51:05.027213  512816 cri.go:96] found id: ""
	I1227 20:51:05.027263  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.027273  512816 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:51:05.027283  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 20:51:05.027361  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:51:05.071930  512816 cri.go:96] found id: ""
	I1227 20:51:05.071965  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.071975  512816 logs.go:284] No container was found matching "etcd"
	I1227 20:51:05.071982  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 20:51:05.072053  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:51:05.107389  512816 cri.go:96] found id: ""
	I1227 20:51:05.107457  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.107479  512816 logs.go:284] No container was found matching "coredns"
	I1227 20:51:05.107501  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:51:05.107591  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:51:05.134987  512816 cri.go:96] found id: ""
	I1227 20:51:05.135061  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.135085  512816 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:51:05.135108  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:51:05.135234  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:51:05.165609  512816 cri.go:96] found id: ""
	I1227 20:51:05.165637  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.165646  512816 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:51:05.165653  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:51:05.165737  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:51:05.194172  512816 cri.go:96] found id: ""
	I1227 20:51:05.194198  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.194208  512816 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:51:05.194215  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 20:51:05.194319  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:51:05.222973  512816 cri.go:96] found id: ""
	I1227 20:51:05.223048  512816 logs.go:282] 0 containers: []
	W1227 20:51:05.223072  512816 logs.go:284] No container was found matching "kindnet"
	I1227 20:51:05.223100  512816 logs.go:123] Gathering logs for kubelet ...
	I1227 20:51:05.223136  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:51:05.281082  512816 logs.go:123] Gathering logs for dmesg ...
	I1227 20:51:05.281117  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:51:05.296692  512816 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:51:05.296723  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:51:05.369377  512816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:51:05.360528    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.361234    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.362904    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.363554    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.365159    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:51:05.360528    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.361234    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.362904    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.363554    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:05.365159    4811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:51:05.369414  512816 logs.go:123] Gathering logs for containerd ...
	I1227 20:51:05.369427  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 20:51:05.409653  512816 logs.go:123] Gathering logs for container status ...
	I1227 20:51:05.409736  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 20:51:05.438490  512816 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000227461s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:51:05.438603  512816 out.go:285] * 
	W1227 20:51:05.438809  512816 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000227461s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:51:05.438888  512816 out.go:285] * 
	W1227 20:51:05.439283  512816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:51:05.446314  512816 out.go:203] 
	W1227 20:51:05.449296  512816 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000227461s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:51:05.449357  512816 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:51:05.449379  512816 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:51:05.452446  512816 out.go:203] 
	W1227 20:51:02.857475  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	W1227 20:51:04.858731  540878 pod_ready.go:104] pod "coredns-7d764666f9-fsvn9" is not "Ready", error: <nil>
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.864969739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865041789Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865147160Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865231206Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865299276Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865363252Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865430608Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865491655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865557896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.865646208Z" level=info msg="Connect containerd service"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.866021768Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.867416857Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.884643720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.884710961Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.884741731Z" level=info msg="Start subscribing containerd event"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.884804829Z" level=info msg="Start recovering state"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922408981Z" level=info msg="Start event monitor"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922614997Z" level=info msg="Start cni network conf syncer for default"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922685012Z" level=info msg="Start streaming server"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922745500Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922808648Z" level=info msg="runtime interface starting up..."
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922863614Z" level=info msg="starting plugins..."
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.922948579Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 20:42:55 force-systemd-flag-875839 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 27 20:42:55 force-systemd-flag-875839 containerd[757]: time="2025-12-27T20:42:55.926796686Z" level=info msg="containerd successfully booted in 0.086871s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:51:06.876511    4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:06.877351    4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:06.879380    4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:06.880066    4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:51:06.881873    4942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 20:51:06 up  2:33,  0 user,  load average: 0.79, 1.31, 1.72
	Linux force-systemd-flag-875839 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 20:51:03 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:51:04 force-systemd-flag-875839 kubelet[4737]: E1227 20:51:04.330770    4737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:51:04 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:51:05 force-systemd-flag-875839 kubelet[4751]: E1227 20:51:05.102901    4751 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:51:05 force-systemd-flag-875839 kubelet[4835]: E1227 20:51:05.867822    4835 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:51:05 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:51:06 force-systemd-flag-875839 kubelet[4874]: E1227 20:51:06.617424    4874 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:51:06 force-systemd-flag-875839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
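
The kubelet journal above pins down the failure: kubelet v1.35 fails its own configuration validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1") and systemd keeps restarting it, with the restart counter past 320 by the end of the capture. To confirm this on the affected node, the checks below follow the commands kubeadm's own error text suggests; the cgroup probe on the last line is an added convenience, not taken from the log:

	minikube ssh -p force-systemd-flag-875839     # shell into the node; profile name from this test
	systemctl status kubelet                      # shows the restart loop: Main process exited, status=1/FAILURE
	journalctl -xeu kubelet | tail -n 20          # shows the cgroup v1 validation error quoted above
	curl -sSL http://127.0.0.1:10248/healthz      # the endpoint kubeadm polls; refused while kubelet is down
	stat -fc %T /sys/fs/cgroup                    # "tmpfs" => cgroup v1 host, "cgroup2fs" => cgroup v2
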
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-875839 -n force-systemd-flag-875839
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-875839 -n force-systemd-flag-875839: exit status 6 (350.487906ms)
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1227 20:51:07.347973  544224 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-875839" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-875839" apiserver is not running, skipping kubectl commands (state="Stopped")
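Note: the "WARNING: Your kubectl is pointing to stale minikube-vm" hint in the status output above is minikube's generic kubeconfig-repair advice; in this run it would not have helped, since the stderr shows the profile never appeared in the kubeconfig at all. For reference, a minimal sketch of the suggested fix for the case where the context does exist (profile name taken from this run; `update-context` is the command the warning itself names):

	out/minikube-linux-arm64 update-context -p force-systemd-flag-875839
	kubectl config current-context   # should then report the refreshed minikube context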
helpers_test.go:176: Cleaning up "force-systemd-flag-875839" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-875839
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-875839: (2.01079817s)
--- FAIL: TestForceSystemdFlag (504.97s)
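The failure mode is visible in the kubelet journal excerpt above: with --force-systemd, minikube configures containerd and kubelet for the systemd cgroup driver, but kubelet v1.35 fails configuration validation on a cgroup v1 host ("cgroup v1 support is unsupported"), so systemd restarts it in a loop (restart counter past 320) and the apiserver never comes up, which is why start exits with status 109. The Ubuntu 20.04 Jenkins host here appears to still be in legacy cgroup v1 mode, consistent with the CgroupDriver:cgroupfs value in the docker info dump of the next test. A hedged sketch of how to check which cgroup mode a host is running:

	# cgroup2fs => unified hierarchy (cgroup v2); tmpfs => legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# On systemd-based distros, cgroup v2 can usually be enabled by booting with
	# the kernel parameter systemd.unified_cgroup_hierarchy=1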

TestForceSystemdEnv (506.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-857112 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1227 20:35:54.880160  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:37:51.828960  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-857112 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m22.269466277s)
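TestForceSystemdEnv exercises the same systemd-cgroup path as the previous test, but through the environment variable rather than the CLI flag: the start invocation above carries no --force-systemd, and MINIKUBE_FORCE_SYSTEMD=true shows up in the settings dump below instead. It fails identically (exit status 109 after roughly 8m20s). A hedged sketch of what the test effectively runs, assuming the harness simply sets the variable in the child process environment:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start \
	  -p force-systemd-env-857112 --memory=3072 --alsologtostderr -v=5 \
	  --driver=docker --container-runtime=containerd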

-- stdout --
	* [force-systemd-env-857112] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-857112" primary control-plane node in "force-systemd-env-857112" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I1227 20:35:20.427864  490122 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:35:20.428381  490122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:35:20.428395  490122 out.go:374] Setting ErrFile to fd 2...
	I1227 20:35:20.428402  490122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:35:20.429227  490122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:35:20.429843  490122 out.go:368] Setting JSON to false
	I1227 20:35:20.430877  490122 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8272,"bootTime":1766859449,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 20:35:20.431068  490122 start.go:143] virtualization:  
	I1227 20:35:20.435047  490122 out.go:179] * [force-systemd-env-857112] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:35:20.439903  490122 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:35:20.439974  490122 notify.go:221] Checking for updates...
	I1227 20:35:20.446833  490122 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:35:20.450273  490122 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:35:20.453603  490122 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 20:35:20.456883  490122 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:35:20.460149  490122 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1227 20:35:20.463856  490122 config.go:182] Loaded profile config "running-upgrade-108405": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1227 20:35:20.463962  490122 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:35:20.495596  490122 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:35:20.495717  490122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:35:20.553393  490122 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:35:20.544225637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:35:20.553510  490122 docker.go:319] overlay module found
	I1227 20:35:20.556868  490122 out.go:179] * Using the docker driver based on user configuration
	I1227 20:35:20.559892  490122 start.go:309] selected driver: docker
	I1227 20:35:20.559917  490122 start.go:928] validating driver "docker" against <nil>
	I1227 20:35:20.559932  490122 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:35:20.560716  490122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:35:20.611882  490122 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:35:20.602979784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:35:20.612028  490122 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:35:20.612249  490122 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:35:20.615399  490122 out.go:179] * Using Docker driver with root privileges
	I1227 20:35:20.618396  490122 cni.go:84] Creating CNI manager for ""
	I1227 20:35:20.618476  490122 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 20:35:20.618492  490122 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:35:20.618583  490122 start.go:353] cluster config:
	{Name:force-systemd-env-857112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-857112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:35:20.621767  490122 out.go:179] * Starting "force-systemd-env-857112" primary control-plane node in "force-systemd-env-857112" cluster
	I1227 20:35:20.624702  490122 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 20:35:20.627719  490122 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:35:20.630564  490122 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:35:20.630623  490122 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 20:35:20.630633  490122 cache.go:65] Caching tarball of preloaded images
	I1227 20:35:20.630732  490122 preload.go:251] Found /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 20:35:20.630748  490122 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 20:35:20.630859  490122 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/config.json ...
	I1227 20:35:20.630883  490122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/config.json: {Name:mk8ae58e6a0a3c6bf55627a22601aece862e0181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:35:20.631068  490122 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:35:20.650935  490122 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:35:20.650956  490122 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:35:20.650977  490122 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:35:20.651010  490122 start.go:360] acquireMachinesLock for force-systemd-env-857112: {Name:mk1f559b31a5831b8c1baba4f81a36902f215320 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:35:20.651123  490122 start.go:364] duration metric: took 90.593µs to acquireMachinesLock for "force-systemd-env-857112"
	I1227 20:35:20.651152  490122 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-857112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-857112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 20:35:20.651258  490122 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:35:20.656496  490122 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:35:20.656770  490122 start.go:159] libmachine.API.Create for "force-systemd-env-857112" (driver="docker")
	I1227 20:35:20.656815  490122 client.go:173] LocalClient.Create starting
	I1227 20:35:20.656896  490122 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem
	I1227 20:35:20.656948  490122 main.go:144] libmachine: Decoding PEM data...
	I1227 20:35:20.656970  490122 main.go:144] libmachine: Parsing certificate...
	I1227 20:35:20.657022  490122 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem
	I1227 20:35:20.657047  490122 main.go:144] libmachine: Decoding PEM data...
	I1227 20:35:20.657063  490122 main.go:144] libmachine: Parsing certificate...
	I1227 20:35:20.657438  490122 cli_runner.go:164] Run: docker network inspect force-systemd-env-857112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:35:20.680794  490122 cli_runner.go:211] docker network inspect force-systemd-env-857112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:35:20.680889  490122 network_create.go:284] running [docker network inspect force-systemd-env-857112] to gather additional debugging logs...
	I1227 20:35:20.680913  490122 cli_runner.go:164] Run: docker network inspect force-systemd-env-857112
	W1227 20:35:20.697996  490122 cli_runner.go:211] docker network inspect force-systemd-env-857112 returned with exit code 1
	I1227 20:35:20.698032  490122 network_create.go:287] error running [docker network inspect force-systemd-env-857112]: docker network inspect force-systemd-env-857112: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-857112 not found
	I1227 20:35:20.698046  490122 network_create.go:289] output of [docker network inspect force-systemd-env-857112]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-857112 not found
	
	** /stderr **
	I1227 20:35:20.698161  490122 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:35:20.715245  490122 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-39a3264d8f81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:08:2a:c8:87:59} reservation:<nil>}
	I1227 20:35:20.715645  490122 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ad751755a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:9d:74:07:ce:ba} reservation:<nil>}
	I1227 20:35:20.715916  490122 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f84ef5e3062f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:ef:60:e2:0e:e4} reservation:<nil>}
	I1227 20:35:20.716307  490122 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d17127b6380d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fa:6f:6f:9c:d9:0d} reservation:<nil>}
	I1227 20:35:20.716767  490122 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cb920}
	I1227 20:35:20.716791  490122 network_create.go:124] attempt to create docker network force-systemd-env-857112 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 20:35:20.716859  490122 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-857112 force-systemd-env-857112
	I1227 20:35:20.781536  490122 network_create.go:108] docker network force-systemd-env-857112 192.168.85.0/24 created
	I1227 20:35:20.781573  490122 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-857112" container
	I1227 20:35:20.781646  490122 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:35:20.807602  490122 cli_runner.go:164] Run: docker volume create force-systemd-env-857112 --label name.minikube.sigs.k8s.io=force-systemd-env-857112 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:35:20.833693  490122 oci.go:103] Successfully created a docker volume force-systemd-env-857112
	I1227 20:35:20.833789  490122 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-857112-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-857112 --entrypoint /usr/bin/test -v force-systemd-env-857112:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:35:21.459876  490122 oci.go:107] Successfully prepared a docker volume force-systemd-env-857112
	I1227 20:35:21.459959  490122 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:35:21.459975  490122 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:35:21.460045  490122 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-857112:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:35:25.853667  490122 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-857112:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.393580685s)
	I1227 20:35:25.853698  490122 kic.go:203] duration metric: took 4.393719746s to extract preloaded images to volume ...
	W1227 20:35:25.853839  490122 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:35:25.853948  490122 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:35:25.939362  490122 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-857112 --name force-systemd-env-857112 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-857112 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-857112 --network force-systemd-env-857112 --ip 192.168.85.2 --volume force-systemd-env-857112:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:35:26.307388  490122 cli_runner.go:164] Run: docker container inspect force-systemd-env-857112 --format={{.State.Running}}
	I1227 20:35:26.331002  490122 cli_runner.go:164] Run: docker container inspect force-systemd-env-857112 --format={{.State.Status}}
	I1227 20:35:26.354733  490122 cli_runner.go:164] Run: docker exec force-systemd-env-857112 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:35:26.421856  490122 oci.go:144] the created container "force-systemd-env-857112" has a running status.
	I1227 20:35:26.421908  490122 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-env-857112/id_rsa...
	I1227 20:35:26.712420  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-env-857112/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 20:35:26.712464  490122 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-env-857112/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:35:26.741064  490122 cli_runner.go:164] Run: docker container inspect force-systemd-env-857112 --format={{.State.Status}}
	I1227 20:35:26.770381  490122 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:35:26.770401  490122 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-857112 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:35:26.837020  490122 cli_runner.go:164] Run: docker container inspect force-systemd-env-857112 --format={{.State.Status}}
	I1227 20:35:26.861088  490122 machine.go:94] provisionDockerMachine start ...
	I1227 20:35:26.861196  490122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-857112
	I1227 20:35:26.885986  490122 main.go:144] libmachine: Using SSH client type: native
	I1227 20:35:26.886349  490122 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1227 20:35:26.886361  490122 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:35:26.887071  490122 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55390->127.0.0.1:33391: read: connection reset by peer
	I1227 20:35:30.075320  490122 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-857112
	
	I1227 20:35:30.075409  490122 ubuntu.go:182] provisioning hostname "force-systemd-env-857112"
	I1227 20:35:30.075524  490122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-857112
	I1227 20:35:30.096961  490122 main.go:144] libmachine: Using SSH client type: native
	I1227 20:35:30.097291  490122 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1227 20:35:30.097322  490122 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-857112 && echo "force-systemd-env-857112" | sudo tee /etc/hostname
	I1227 20:35:30.248760  490122 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-857112
	
	I1227 20:35:30.248877  490122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-857112
	I1227 20:35:30.266578  490122 main.go:144] libmachine: Using SSH client type: native
	I1227 20:35:30.266902  490122 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33391 <nil> <nil>}
	I1227 20:35:30.266926  490122 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-857112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-857112/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-857112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:35:30.403677  490122 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:35:30.403703  490122 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-300670/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-300670/.minikube}
	I1227 20:35:30.403723  490122 ubuntu.go:190] setting up certificates
	I1227 20:35:30.403733  490122 provision.go:84] configureAuth start
	I1227 20:35:30.403794  490122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-857112
	I1227 20:35:30.420696  490122 provision.go:143] copyHostCerts
	I1227 20:35:30.420748  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
	I1227 20:35:30.420782  490122 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem, removing ...
	I1227 20:35:30.420803  490122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
	I1227 20:35:30.420891  490122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem (1082 bytes)
	I1227 20:35:30.420971  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
	I1227 20:35:30.420994  490122 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem, removing ...
	I1227 20:35:30.420998  490122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
	I1227 20:35:30.421027  490122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem (1123 bytes)
	I1227 20:35:30.421065  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
	I1227 20:35:30.421079  490122 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem, removing ...
	I1227 20:35:30.421083  490122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
	I1227 20:35:30.421106  490122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem (1679 bytes)
	I1227 20:35:30.421149  490122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-857112 san=[127.0.0.1 192.168.85.2 force-systemd-env-857112 localhost minikube]
	I1227 20:35:30.553536  490122 provision.go:177] copyRemoteCerts
	I1227 20:35:30.553609  490122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:35:30.553655  490122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-857112
	I1227 20:35:30.572454  490122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-env-857112/id_rsa Username:docker}
	I1227 20:35:30.675524  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:35:30.675627  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 20:35:30.695461  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:35:30.695531  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 20:35:30.712532  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:35:30.712602  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 20:35:30.730513  490122 provision.go:87] duration metric: took 326.749374ms to configureAuth
	I1227 20:35:30.730546  490122 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:35:30.730733  490122 config.go:182] Loaded profile config "force-systemd-env-857112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:35:30.730748  490122 machine.go:97] duration metric: took 3.869641647s to provisionDockerMachine
	I1227 20:35:30.730755  490122 client.go:176] duration metric: took 10.073930094s to LocalClient.Create
	I1227 20:35:30.730776  490122 start.go:167] duration metric: took 10.074008667s to libmachine.API.Create "force-systemd-env-857112"
	I1227 20:35:30.730792  490122 start.go:293] postStartSetup for "force-systemd-env-857112" (driver="docker")
	I1227 20:35:30.730801  490122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:35:30.730857  490122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:35:30.730899  490122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-857112
	I1227 20:35:30.748710  490122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-env-857112/id_rsa Username:docker}
	I1227 20:35:30.847165  490122 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:35:30.850560  490122 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:35:30.850596  490122 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:35:30.850608  490122 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/addons for local assets ...
	I1227 20:35:30.850663  490122 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/files for local assets ...
	I1227 20:35:30.850744  490122 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> 3025412.pem in /etc/ssl/certs
	I1227 20:35:30.850755  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> /etc/ssl/certs/3025412.pem
	I1227 20:35:30.850858  490122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:35:30.858422  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /etc/ssl/certs/3025412.pem (1708 bytes)
	I1227 20:35:30.876340  490122 start.go:296] duration metric: took 145.533175ms for postStartSetup
	I1227 20:35:30.876767  490122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-857112
	I1227 20:35:30.893572  490122 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/config.json ...
	I1227 20:35:30.893858  490122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:35:30.893928  490122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-857112
	I1227 20:35:30.911157  490122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-env-857112/id_rsa Username:docker}
	I1227 20:35:31.008567  490122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:35:31.013615  490122 start.go:128] duration metric: took 10.362341212s to createHost
	I1227 20:35:31.013644  490122 start.go:83] releasing machines lock for "force-systemd-env-857112", held for 10.362507194s
	I1227 20:35:31.013725  490122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-857112
	I1227 20:35:31.030938  490122 ssh_runner.go:195] Run: cat /version.json
	I1227 20:35:31.030987  490122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-857112
	I1227 20:35:31.031345  490122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:35:31.031404  490122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-857112
	I1227 20:35:31.048102  490122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-env-857112/id_rsa Username:docker}
	I1227 20:35:31.061683  490122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33391 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-env-857112/id_rsa Username:docker}
	I1227 20:35:31.147138  490122 ssh_runner.go:195] Run: systemctl --version
	I1227 20:35:31.248126  490122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:35:31.252433  490122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:35:31.252561  490122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:35:31.280278  490122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:35:31.280313  490122 start.go:496] detecting cgroup driver to use...
	I1227 20:35:31.280333  490122 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:35:31.280385  490122 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 20:35:31.296721  490122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 20:35:31.309712  490122 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:35:31.309808  490122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:35:31.327979  490122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:35:31.347357  490122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:35:31.479306  490122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:35:31.603287  490122 docker.go:234] disabling docker service ...
	I1227 20:35:31.603418  490122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:35:31.627897  490122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:35:31.641713  490122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:35:31.768863  490122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:35:31.888294  490122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:35:31.901683  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:35:31.915982  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 20:35:31.925473  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 20:35:31.934402  490122 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 20:35:31.934487  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 20:35:31.943799  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:35:31.952388  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 20:35:31.960909  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:35:31.969385  490122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:35:31.977293  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 20:35:31.985948  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 20:35:31.995169  490122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 20:35:32.006656  490122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:35:32.014370  490122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:35:32.022110  490122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:35:32.132562  490122 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 20:35:32.262791  490122 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 20:35:32.262902  490122 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 20:35:32.266998  490122 start.go:574] Will wait 60s for crictl version
	I1227 20:35:32.267087  490122 ssh_runner.go:195] Run: which crictl
	I1227 20:35:32.270628  490122 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:35:32.298366  490122 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 20:35:32.298478  490122 ssh_runner.go:195] Run: containerd --version
	I1227 20:35:32.322318  490122 ssh_runner.go:195] Run: containerd --version
	I1227 20:35:32.349567  490122 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 20:35:32.352616  490122 cli_runner.go:164] Run: docker network inspect force-systemd-env-857112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:35:32.368203  490122 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 20:35:32.372108  490122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:35:32.381902  490122 kubeadm.go:884] updating cluster {Name:force-systemd-env-857112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-857112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:35:32.382025  490122 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:35:32.382098  490122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:35:32.408898  490122 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 20:35:32.408925  490122 containerd.go:542] Images already preloaded, skipping extraction
	I1227 20:35:32.408985  490122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:35:32.434226  490122 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 20:35:32.434254  490122 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:35:32.434269  490122 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1227 20:35:32.434363  490122 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-857112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-857112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:35:32.434436  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 20:35:32.468673  490122 cni.go:84] Creating CNI manager for ""
	I1227 20:35:32.468700  490122 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 20:35:32.468720  490122 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:35:32.468744  490122 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-857112 NodeName:force-systemd-env-857112 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:35:32.468878  490122 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-857112"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:35:32.468951  490122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:35:32.477056  490122 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:35:32.477123  490122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:35:32.486918  490122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1227 20:35:32.508127  490122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:35:32.528274  490122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I1227 20:35:32.545789  490122 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:35:32.552030  490122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:35:32.561794  490122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:35:32.722041  490122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:35:32.746593  490122 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112 for IP: 192.168.85.2
	I1227 20:35:32.746615  490122 certs.go:195] generating shared ca certs ...
	I1227 20:35:32.746631  490122 certs.go:227] acquiring lock for ca certs: {Name:mkf93c4b7b6f0a265527090e39bdf731f6a1491b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:35:32.746792  490122 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key
	I1227 20:35:32.746846  490122 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key
	I1227 20:35:32.746858  490122 certs.go:257] generating profile certs ...
	I1227 20:35:32.746917  490122 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/client.key
	I1227 20:35:32.746935  490122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/client.crt with IP's: []
	I1227 20:35:33.615901  490122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/client.crt ...
	I1227 20:35:33.615933  490122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/client.crt: {Name:mk0e28a8325683d416c8cce084539b6a014737a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:35:33.616121  490122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/client.key ...
	I1227 20:35:33.616131  490122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/client.key: {Name:mk946995d56225fa7f3b6db38f359975d57df25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:35:33.616210  490122 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.key.78d6a1af
	I1227 20:35:33.616224  490122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.crt.78d6a1af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 20:35:33.720932  490122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.crt.78d6a1af ...
	I1227 20:35:33.720966  490122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.crt.78d6a1af: {Name:mk0a2b7593cfde494e21f5e948c417318b937598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:35:33.721136  490122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.key.78d6a1af ...
	I1227 20:35:33.721153  490122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.key.78d6a1af: {Name:mk07d25b5775e7e764d57d17a68749818c91ca01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:35:33.721233  490122 certs.go:382] copying /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.crt.78d6a1af -> /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.crt
	I1227 20:35:33.721314  490122 certs.go:386] copying /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.key.78d6a1af -> /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.key
	I1227 20:35:33.721385  490122 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.key
	I1227 20:35:33.721404  490122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.crt with IP's: []
	I1227 20:35:33.778764  490122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.crt ...
	I1227 20:35:33.778790  490122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.crt: {Name:mk0b79006efd4bbb185b9640688f4ebdae4c84e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:35:33.779020  490122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.key ...
	I1227 20:35:33.779038  490122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.key: {Name:mke7338ade46e92529f428bc6c2866fc7d776505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:35:33.779165  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:35:33.779233  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:35:33.779248  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:35:33.779260  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:35:33.779299  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:35:33.779318  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:35:33.779365  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:35:33.779383  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:35:33.779453  490122 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem (1338 bytes)
	W1227 20:35:33.779493  490122 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541_empty.pem, impossibly tiny 0 bytes
	I1227 20:35:33.779502  490122 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:35:33.779550  490122 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem (1082 bytes)
	I1227 20:35:33.779579  490122 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:35:33.779616  490122 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem (1679 bytes)
	I1227 20:35:33.779668  490122 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem (1708 bytes)
	I1227 20:35:33.779713  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> /usr/share/ca-certificates/3025412.pem
	I1227 20:35:33.779726  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:35:33.779738  490122 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem -> /usr/share/ca-certificates/302541.pem
	I1227 20:35:33.780389  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:35:33.803004  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:35:33.826543  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:35:33.844979  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:35:33.872272  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 20:35:33.899088  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 20:35:33.923711  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:35:33.944871  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-env-857112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1227 20:35:33.965064  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /usr/share/ca-certificates/3025412.pem (1708 bytes)
	I1227 20:35:33.995314  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:35:34.018208  490122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem --> /usr/share/ca-certificates/302541.pem (1338 bytes)
	I1227 20:35:34.044775  490122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:35:34.069134  490122 ssh_runner.go:195] Run: openssl version
	I1227 20:35:34.077360  490122 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:35:34.086179  490122 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:35:34.093950  490122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:35:34.098194  490122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:35:34.098274  490122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:35:34.142150  490122 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:35:34.149553  490122 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 20:35:34.156931  490122 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/302541.pem
	I1227 20:35:34.164383  490122 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/302541.pem /etc/ssl/certs/302541.pem
	I1227 20:35:34.171985  490122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302541.pem
	I1227 20:35:34.175873  490122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:01 /usr/share/ca-certificates/302541.pem
	I1227 20:35:34.175960  490122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302541.pem
	I1227 20:35:34.216879  490122 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:35:34.225162  490122 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/302541.pem /etc/ssl/certs/51391683.0
	I1227 20:35:34.232761  490122 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3025412.pem
	I1227 20:35:34.240648  490122 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3025412.pem /etc/ssl/certs/3025412.pem
	I1227 20:35:34.248066  490122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3025412.pem
	I1227 20:35:34.251941  490122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:01 /usr/share/ca-certificates/3025412.pem
	I1227 20:35:34.252012  490122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3025412.pem
	I1227 20:35:34.298110  490122 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:35:34.305734  490122 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3025412.pem /etc/ssl/certs/3ec20f2e.0
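The three openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL trust-directory layout, where a certificate is located by a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A small Go sketch of the same two steps, shelling out to openssl just as the log does; paths and the helper name are illustrative and error handling is trimmed:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the OpenSSL subject hash for pemPath and links it
// into dir as "<hash>.0", mirroring the Run: lines in the log above.
func trustCert(pemPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	os.Remove(link) // emulate ln -fs: force-replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}

The same three-step pattern (test -s, ln -fs, hash, link) repeats once per CA, which is why minikubeCA.pem, 302541.pem, and 3025412.pem each get their own sequence above.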
	I1227 20:35:34.313156  490122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:35:34.317736  490122 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:35:34.317785  490122 kubeadm.go:401] StartCluster: {Name:force-systemd-env-857112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-857112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:35:34.317859  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 20:35:34.317919  490122 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:35:34.345072  490122 cri.go:96] found id: ""
	I1227 20:35:34.345171  490122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:35:34.353581  490122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:35:34.361710  490122 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:35:34.361798  490122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:35:34.369846  490122 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:35:34.369917  490122 kubeadm.go:158] found existing configuration files:
	
	I1227 20:35:34.369977  490122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:35:34.377920  490122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:35:34.378008  490122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:35:34.386159  490122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:35:34.393965  490122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:35:34.394082  490122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:35:34.401870  490122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:35:34.409479  490122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:35:34.409595  490122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:35:34.417017  490122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:35:34.425146  490122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:35:34.425241  490122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:35:34.432865  490122 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:35:34.472969  490122 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:35:34.473076  490122 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:35:34.540387  490122 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:35:34.540512  490122 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:35:34.540590  490122 kubeadm.go:319] OS: Linux
	I1227 20:35:34.540679  490122 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:35:34.540759  490122 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:35:34.540837  490122 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:35:34.540919  490122 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:35:34.540994  490122 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:35:34.541072  490122 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:35:34.541149  490122 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:35:34.541223  490122 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:35:34.541304  490122 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:35:34.604735  490122 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:35:34.604908  490122 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:35:34.605026  490122 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:35:34.613481  490122 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:35:34.619569  490122 out.go:252]   - Generating certificates and keys ...
	I1227 20:35:34.619738  490122 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:35:34.619861  490122 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:35:35.277237  490122 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:35:35.826070  490122 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:35:35.933341  490122 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:35:36.293351  490122 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:35:36.455419  490122 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:35:36.455753  490122 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-857112 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:35:36.739604  490122 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:35:36.739838  490122 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-857112 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 20:35:37.017000  490122 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:35:37.149939  490122 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:35:37.429044  490122 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:35:37.429505  490122 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:35:37.679939  490122 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:35:37.899612  490122 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:35:38.244962  490122 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:35:38.844466  490122 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:35:39.390827  490122 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:35:39.392006  490122 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:35:39.395885  490122 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:35:39.399570  490122 out.go:252]   - Booting up control plane ...
	I1227 20:35:39.399696  490122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:35:39.399779  490122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:35:39.399854  490122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:35:39.418131  490122 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:35:39.418249  490122 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:35:39.427731  490122 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:35:39.428040  490122 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:35:39.428287  490122 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:35:39.564940  490122 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:35:39.565061  490122 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:39:39.562578  490122 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00113582s
	I1227 20:39:39.562671  490122 kubeadm.go:319] 
	I1227 20:39:39.562731  490122 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:39:39.562771  490122 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:39:39.562879  490122 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:39:39.562889  490122 kubeadm.go:319] 
	I1227 20:39:39.563018  490122 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:39:39.563057  490122 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:39:39.563093  490122 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:39:39.563102  490122 kubeadm.go:319] 
	I1227 20:39:39.572921  490122 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:39:39.573351  490122 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:39:39.573458  490122 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:39:39.573693  490122 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:39:39.573699  490122 kubeadm.go:319] 
	I1227 20:39:39.573768  490122 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
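Both init attempts in this test die at the same gate: kubeadm's [kubelet-check] phase polls http://127.0.0.1:10248/healthz for up to 4m0s and gives up when the kubelet never answers, which on this cgroup v1 host with a forced systemd cgroup driver is consistent with the FailCgroupV1 warning in the preflight output above. A reduced, stdlib-only Go sketch of that wait loop; the timing matches the log and the probe is a plain HTTP GET, but this is an approximation of kubeadm's behavior, not its code:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint until it answers
// 200 or the deadline passes, like kubeadm's [kubelet-check] phase.
func waitKubeletHealthy(ctx context.Context, url string) error {
	tick := time.NewTicker(time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
		case <-tick.C:
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitKubeletHealthy(ctx, "http://127.0.0.1:10248/healthz"))
}

minikube then runs kubeadm reset and retries the whole init, which is why the identical 4m0s timeout reappears at 20:43:42 further down.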
	W1227 20:39:39.573900  490122 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-857112 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-857112 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00113582s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 20:39:39.573979  490122 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1227 20:39:40.004756  490122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:39:40.032995  490122 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:39:40.033150  490122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:39:40.046795  490122 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:39:40.046874  490122 kubeadm.go:158] found existing configuration files:
	
	I1227 20:39:40.046966  490122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:39:40.057958  490122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:39:40.058088  490122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:39:40.067939  490122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:39:40.080057  490122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:39:40.080189  490122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:39:40.091022  490122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:39:40.102444  490122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:39:40.102567  490122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:39:40.112807  490122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:39:40.123792  490122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:39:40.123913  490122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 20:39:40.133053  490122 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:39:40.188950  490122 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:39:40.189497  490122 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:39:40.305877  490122 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:39:40.306036  490122 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:39:40.306119  490122 kubeadm.go:319] OS: Linux
	I1227 20:39:40.306211  490122 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:39:40.306291  490122 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:39:40.306374  490122 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:39:40.306495  490122 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:39:40.306588  490122 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:39:40.306681  490122 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:39:40.306775  490122 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:39:40.306859  490122 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:39:40.306944  490122 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:39:40.392146  490122 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:39:40.392326  490122 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:39:40.392457  490122 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:39:40.399793  490122 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:39:40.406690  490122 out.go:252]   - Generating certificates and keys ...
	I1227 20:39:40.406788  490122 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:39:40.406859  490122 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:39:40.406938  490122 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 20:39:40.407002  490122 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 20:39:40.407077  490122 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 20:39:40.407137  490122 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 20:39:40.407280  490122 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 20:39:40.407350  490122 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 20:39:40.407429  490122 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 20:39:40.407542  490122 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 20:39:40.407813  490122 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 20:39:40.407993  490122 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:39:40.465980  490122 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:39:40.698641  490122 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:39:41.518321  490122 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:39:41.813253  490122 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:39:41.972266  490122 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:39:41.972980  490122 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:39:41.975548  490122 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:39:41.978677  490122 out.go:252]   - Booting up control plane ...
	I1227 20:39:41.978788  490122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:39:41.978873  490122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:39:41.979380  490122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:39:42.004087  490122 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:39:42.004448  490122 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:39:42.013504  490122 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:39:42.013840  490122 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:39:42.013892  490122 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:39:42.179699  490122 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:39:42.179832  490122 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:43:42.180572  490122 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002384899s
	I1227 20:43:42.180603  490122 kubeadm.go:319] 
	I1227 20:43:42.180662  490122 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:43:42.180696  490122 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:43:42.181245  490122 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:43:42.181273  490122 kubeadm.go:319] 
	I1227 20:43:42.181550  490122 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:43:42.181612  490122 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:43:42.181673  490122 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:43:42.181680  490122 kubeadm.go:319] 
	I1227 20:43:42.190437  490122 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:43:42.190957  490122 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:43:42.191079  490122 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:43:42.191447  490122 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:43:42.191488  490122 kubeadm.go:319] 
	I1227 20:43:42.191645  490122 kubeadm.go:403] duration metric: took 8m7.873865664s to StartCluster
	I1227 20:43:42.191691  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:43:42.191740  490122 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:43:42.191765  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:43:42.220088  490122 cri.go:96] found id: ""
	I1227 20:43:42.220132  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.220143  490122 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:43:42.220150  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 20:43:42.220226  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:43:42.250604  490122 cri.go:96] found id: ""
	I1227 20:43:42.250629  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.250638  490122 logs.go:284] No container was found matching "etcd"
	I1227 20:43:42.250645  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 20:43:42.250710  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:43:42.290197  490122 cri.go:96] found id: ""
	I1227 20:43:42.290221  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.290230  490122 logs.go:284] No container was found matching "coredns"
	I1227 20:43:42.290237  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:43:42.290298  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:43:42.319099  490122 cri.go:96] found id: ""
	I1227 20:43:42.319172  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.319244  490122 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:43:42.319264  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:43:42.319393  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:43:42.356871  490122 cri.go:96] found id: ""
	I1227 20:43:42.356896  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.356905  490122 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:43:42.356912  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:43:42.356972  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:43:42.381385  490122 cri.go:96] found id: ""
	I1227 20:43:42.381410  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.381420  490122 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:43:42.381427  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 20:43:42.381487  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:43:42.405560  490122 cri.go:96] found id: ""
	I1227 20:43:42.405588  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.405609  490122 logs.go:284] No container was found matching "kindnet"
	I1227 20:43:42.405638  490122 logs.go:123] Gathering logs for kubelet ...
	I1227 20:43:42.405658  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:43:42.465711  490122 logs.go:123] Gathering logs for dmesg ...
	I1227 20:43:42.465748  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:43:42.480491  490122 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:43:42.480521  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:43:42.551679  490122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:43:42.542984    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.543806    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.545464    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.545794    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.547357    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1227 20:43:42.551707  490122 logs.go:123] Gathering logs for containerd ...
	I1227 20:43:42.551718  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 20:43:42.595963  490122 logs.go:123] Gathering logs for container status ...
	I1227 20:43:42.596039  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 20:43:42.628573  490122 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002384899s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 20:43:42.628630  490122 out.go:285] * 
	W1227 20:43:42.628681  490122 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002384899s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:43:42.628699  490122 out.go:285] * 
	W1227 20:43:42.628950  490122 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:43:42.634684  490122 out.go:203] 
	W1227 20:43:42.638442  490122 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002384899s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 20:43:42.638490  490122 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:43:42.638512  490122 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:43:42.641703  490122 out.go:203] 

                                                
                                                
** /stderr **
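The failure mode above is the kubelet never answering its health check at 127.0.0.1:10248 while minikube forces the systemd cgroup driver. A minimal retry sketch based on the suggestion printed in the log itself (profile name, memory, driver, and runtime flags are copied from this run; whether the extra kubelet config actually resolves this particular timeout is an assumption, not something the report confirms):

	# sketch only: delete the stale profile, then retry with the cgroup driver
	# named in the log's suggestion (--extra-config value taken verbatim from it)
	out/minikube-linux-arm64 delete -p force-systemd-env-857112
	out/minikube-linux-arm64 start -p force-systemd-env-857112 --memory=3072 \
	  --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd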
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-857112 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-857112 ssh "cat /etc/containerd/config.toml"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-27 20:43:43.004080794 +0000 UTC m=+2900.523222163
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-857112
helpers_test.go:244: (dbg) docker inspect force-systemd-env-857112:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "362784cde925d2bdd6992b56b137cbf307988fbf2b5243a1b53870b523815866",
	        "Created": "2025-12-27T20:35:25.96467671Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490820,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T20:35:26.034587214Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/362784cde925d2bdd6992b56b137cbf307988fbf2b5243a1b53870b523815866/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/362784cde925d2bdd6992b56b137cbf307988fbf2b5243a1b53870b523815866/hostname",
	        "HostsPath": "/var/lib/docker/containers/362784cde925d2bdd6992b56b137cbf307988fbf2b5243a1b53870b523815866/hosts",
	        "LogPath": "/var/lib/docker/containers/362784cde925d2bdd6992b56b137cbf307988fbf2b5243a1b53870b523815866/362784cde925d2bdd6992b56b137cbf307988fbf2b5243a1b53870b523815866-json.log",
	        "Name": "/force-systemd-env-857112",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-857112:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-857112",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "362784cde925d2bdd6992b56b137cbf307988fbf2b5243a1b53870b523815866",
	                "LowerDir": "/var/lib/docker/overlay2/63c878d2076e4965c127c28f50630dd6c1eb9f98a42c7bf141363e5892bfbb0f-init/diff:/var/lib/docker/overlay2/3aa037d6df727552c898397d6b697d27a219037ea9700eb1f4b4eaf57c46a788/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63c878d2076e4965c127c28f50630dd6c1eb9f98a42c7bf141363e5892bfbb0f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63c878d2076e4965c127c28f50630dd6c1eb9f98a42c7bf141363e5892bfbb0f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63c878d2076e4965c127c28f50630dd6c1eb9f98a42c7bf141363e5892bfbb0f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-857112",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-857112/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-857112",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-857112",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-857112",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f78ef0be9f728347904237fe7f6a37f79bb3d719882cf15e09f1988dfd141f27",
	            "SandboxKey": "/var/run/docker/netns/f78ef0be9f72",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-857112": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:02:af:72:d2:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "403c21c8415815cbac85f1ab0656e0ba250801d5d28b38281015ae19ea6315fb",
	                    "EndpointID": "b49599569178f578140395c1891100be6f086f67cf7a5f9aa66523e6b5faf343",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-857112",
	                        "362784cde925"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
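The inspect dump above is easier to consume through a Go-template format string than by reading the full JSON; for example, the host port mapped to the node's SSH port (22/tcp) can be pulled directly (container name taken from this report):

	# prints 33391 for the container state captured above
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-env-857112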
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-857112 -n force-systemd-env-857112
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-857112 -n force-systemd-env-857112: exit status 6 (347.611884ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:43:43.354018  515477 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-857112" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig

                                                
                                                
** /stderr **
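The status error above means the profile's endpoint is missing from the kubeconfig rather than the node being down; the warning in the stdout block suggests repairing the context. A sketch of that repair, assuming the profile still exists at this point in the run:

	# rewrites the kubeconfig entry for this profile, per the warning above
	out/minikube-linux-arm64 update-context -p force-systemd-env-857112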
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-857112 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-779255 sudo cat /var/lib/kubelet/config.yaml                                                                            │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo systemctl status docker --all --full --no-pager                                                             │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo systemctl cat docker --no-pager                                                                             │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo cat /etc/docker/daemon.json                                                                                 │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo docker system info                                                                                          │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo systemctl status cri-docker --all --full --no-pager                                                         │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo systemctl cat cri-docker --no-pager                                                                         │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                    │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo cat /usr/lib/systemd/system/cri-docker.service                                                              │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo cri-dockerd --version                                                                                       │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo systemctl status containerd --all --full --no-pager                                                         │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo systemctl cat containerd --no-pager                                                                         │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo cat /lib/systemd/system/containerd.service                                                                  │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo cat /etc/containerd/config.toml                                                                             │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo containerd config dump                                                                                      │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo systemctl status crio --all --full --no-pager                                                               │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo systemctl cat crio --no-pager                                                                               │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                     │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ ssh     │ -p cilium-779255 sudo crio config                                                                                                 │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │                     │
	│ delete  │ -p cilium-779255                                                                                                                  │ cilium-779255             │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │ 27 Dec 25 20:39 UTC │
	│ start   │ -p cert-expiration-794518 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                      │ cert-expiration-794518    │ jenkins │ v1.37.0 │ 27 Dec 25 20:39 UTC │ 27 Dec 25 20:39 UTC │
	│ start   │ -p cert-expiration-794518 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                   │ cert-expiration-794518    │ jenkins │ v1.37.0 │ 27 Dec 25 20:42 UTC │ 27 Dec 25 20:42 UTC │
	│ delete  │ -p cert-expiration-794518                                                                                                         │ cert-expiration-794518    │ jenkins │ v1.37.0 │ 27 Dec 25 20:42 UTC │ 27 Dec 25 20:42 UTC │
	│ start   │ -p force-systemd-flag-875839 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-flag-875839 │ jenkins │ v1.37.0 │ 27 Dec 25 20:42 UTC │                     │
	│ ssh     │ force-systemd-env-857112 ssh cat /etc/containerd/config.toml                                                                      │ force-systemd-env-857112  │ jenkins │ v1.37.0 │ 27 Dec 25 20:43 UTC │ 27 Dec 25 20:43 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 20:42:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 20:42:44.450614  512816 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:42:44.450748  512816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:42:44.450759  512816 out.go:374] Setting ErrFile to fd 2...
	I1227 20:42:44.450765  512816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:42:44.451046  512816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:42:44.451537  512816 out.go:368] Setting JSON to false
	I1227 20:42:44.452468  512816 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8716,"bootTime":1766859449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 20:42:44.452539  512816 start.go:143] virtualization:  
	I1227 20:42:44.456098  512816 out.go:179] * [force-systemd-flag-875839] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:42:44.460810  512816 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:42:44.460944  512816 notify.go:221] Checking for updates...
	I1227 20:42:44.467481  512816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:42:44.470760  512816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:42:44.474017  512816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 20:42:44.477177  512816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:42:44.480226  512816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:42:44.483786  512816 config.go:182] Loaded profile config "force-systemd-env-857112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:42:44.483901  512816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:42:44.514242  512816 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:42:44.514368  512816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:42:44.600674  512816 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:42:44.590030356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:42:44.600784  512816 docker.go:319] overlay module found
	I1227 20:42:44.603988  512816 out.go:179] * Using the docker driver based on user configuration
	I1227 20:42:44.606895  512816 start.go:309] selected driver: docker
	I1227 20:42:44.606918  512816 start.go:928] validating driver "docker" against <nil>
	I1227 20:42:44.606938  512816 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:42:44.607721  512816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:42:44.660643  512816 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:42:44.65175192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:42:44.660805  512816 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 20:42:44.661029  512816 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 20:42:44.663983  512816 out.go:179] * Using Docker driver with root privileges
	I1227 20:42:44.666777  512816 cni.go:84] Creating CNI manager for ""
	I1227 20:42:44.666837  512816 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 20:42:44.666853  512816 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 20:42:44.666931  512816 start.go:353] cluster config:
	{Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I1227 20:42:44.670122  512816 out.go:179] * Starting "force-systemd-flag-875839" primary control-plane node in "force-systemd-flag-875839" cluster
	I1227 20:42:44.673023  512816 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 20:42:44.675977  512816 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 20:42:44.678899  512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:42:44.678926  512816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 20:42:44.678947  512816 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 20:42:44.678957  512816 cache.go:65] Caching tarball of preloaded images
	I1227 20:42:44.679037  512816 preload.go:251] Found /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 20:42:44.679046  512816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 20:42:44.679152  512816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json ...
	I1227 20:42:44.679204  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json: {Name:mk226d5712d36dc79e3bc51dc29625caf226ee6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:44.698707  512816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 20:42:44.698733  512816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 20:42:44.698766  512816 cache.go:243] Successfully downloaded all kic artifacts
	I1227 20:42:44.698799  512816 start.go:360] acquireMachinesLock for force-systemd-flag-875839: {Name:mka1cb79a66dbff1223f12a6e0653c935a407a1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 20:42:44.698917  512816 start.go:364] duration metric: took 96.443µs to acquireMachinesLock for "force-systemd-flag-875839"
	I1227 20:42:44.698951  512816 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 20:42:44.699019  512816 start.go:125] createHost starting for "" (driver="docker")
	I1227 20:42:44.702439  512816 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 20:42:44.702717  512816 start.go:159] libmachine.API.Create for "force-systemd-flag-875839" (driver="docker")
	I1227 20:42:44.702756  512816 client.go:173] LocalClient.Create starting
	I1227 20:42:44.702822  512816 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem
	I1227 20:42:44.702861  512816 main.go:144] libmachine: Decoding PEM data...
	I1227 20:42:44.702888  512816 main.go:144] libmachine: Parsing certificate...
	I1227 20:42:44.702941  512816 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem
	I1227 20:42:44.702963  512816 main.go:144] libmachine: Decoding PEM data...
	I1227 20:42:44.702975  512816 main.go:144] libmachine: Parsing certificate...
	I1227 20:42:44.703517  512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 20:42:44.719292  512816 cli_runner.go:211] docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 20:42:44.719369  512816 network_create.go:284] running [docker network inspect force-systemd-flag-875839] to gather additional debugging logs...
	I1227 20:42:44.719387  512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839
	W1227 20:42:44.733398  512816 cli_runner.go:211] docker network inspect force-systemd-flag-875839 returned with exit code 1
	I1227 20:42:44.733430  512816 network_create.go:287] error running [docker network inspect force-systemd-flag-875839]: docker network inspect force-systemd-flag-875839: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-875839 not found
	I1227 20:42:44.733442  512816 network_create.go:289] output of [docker network inspect force-systemd-flag-875839]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-875839 not found
	
	** /stderr **
	I1227 20:42:44.733536  512816 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:42:44.750679  512816 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-39a3264d8f81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:08:2a:c8:87:59} reservation:<nil>}
	I1227 20:42:44.751059  512816 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ad751755a00 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:9d:74:07:ce:ba} reservation:<nil>}
	I1227 20:42:44.751350  512816 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f84ef5e3062f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b6:ef:60:e2:0e:e4} reservation:<nil>}
	I1227 20:42:44.751800  512816 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a47a80}
	I1227 20:42:44.751824  512816 network_create.go:124] attempt to create docker network force-systemd-flag-875839 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 20:42:44.751879  512816 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-875839 force-systemd-flag-875839
	I1227 20:42:44.817033  512816 network_create.go:108] docker network force-systemd-flag-875839 192.168.76.0/24 created
	I1227 20:42:44.817068  512816 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-875839" container
	I1227 20:42:44.817162  512816 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 20:42:44.833900  512816 cli_runner.go:164] Run: docker volume create force-systemd-flag-875839 --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --label created_by.minikube.sigs.k8s.io=true
	I1227 20:42:44.855305  512816 oci.go:103] Successfully created a docker volume force-systemd-flag-875839
	I1227 20:42:44.855397  512816 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-875839-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --entrypoint /usr/bin/test -v force-systemd-flag-875839:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 20:42:45.520564  512816 oci.go:107] Successfully prepared a docker volume force-systemd-flag-875839
	I1227 20:42:45.520637  512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:42:45.520651  512816 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 20:42:45.520724  512816 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-875839:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 20:42:49.411447  512816 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-875839:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.890669583s)
	I1227 20:42:49.411480  512816 kic.go:203] duration metric: took 3.890825481s to extract preloaded images to volume ...
	W1227 20:42:49.411625  512816 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 20:42:49.411780  512816 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 20:42:49.466802  512816 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-875839 --name force-systemd-flag-875839 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-875839 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-875839 --network force-systemd-flag-875839 --ip 192.168.76.2 --volume force-systemd-flag-875839:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 20:42:49.764752  512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Running}}
	I1227 20:42:49.794580  512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
	I1227 20:42:49.818888  512816 cli_runner.go:164] Run: docker exec force-systemd-flag-875839 stat /var/lib/dpkg/alternatives/iptables
	I1227 20:42:49.884825  512816 oci.go:144] the created container "force-systemd-flag-875839" has a running status.
	I1227 20:42:49.884858  512816 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa...
	I1227 20:42:50.331141  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 20:42:50.331230  512816 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 20:42:50.354044  512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
	I1227 20:42:50.377426  512816 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 20:42:50.377459  512816 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-875839 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 20:42:50.420652  512816 cli_runner.go:164] Run: docker container inspect force-systemd-flag-875839 --format={{.State.Status}}
	I1227 20:42:50.438519  512816 machine.go:94] provisionDockerMachine start ...
	I1227 20:42:50.438612  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:50.456377  512816 main.go:144] libmachine: Using SSH client type: native
	I1227 20:42:50.456728  512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33416 <nil> <nil>}
	I1227 20:42:50.456744  512816 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 20:42:50.457445  512816 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 20:42:53.598911  512816 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-875839
	
	I1227 20:42:53.598937  512816 ubuntu.go:182] provisioning hostname "force-systemd-flag-875839"
	I1227 20:42:53.599044  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:53.617338  512816 main.go:144] libmachine: Using SSH client type: native
	I1227 20:42:53.617662  512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33416 <nil> <nil>}
	I1227 20:42:53.617679  512816 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-875839 && echo "force-systemd-flag-875839" | sudo tee /etc/hostname
	I1227 20:42:53.764333  512816 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-875839
	
	I1227 20:42:53.764479  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:53.782984  512816 main.go:144] libmachine: Using SSH client type: native
	I1227 20:42:53.783321  512816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33416 <nil> <nil>}
	I1227 20:42:53.783352  512816 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-875839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-875839/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-875839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 20:42:53.923458  512816 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 20:42:53.923486  512816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22332-300670/.minikube CaCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22332-300670/.minikube}
	I1227 20:42:53.923554  512816 ubuntu.go:190] setting up certificates
	I1227 20:42:53.923579  512816 provision.go:84] configureAuth start
	I1227 20:42:53.923657  512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
	I1227 20:42:53.941558  512816 provision.go:143] copyHostCerts
	I1227 20:42:53.941608  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
	I1227 20:42:53.941644  512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem, removing ...
	I1227 20:42:53.941656  512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem
	I1227 20:42:53.941740  512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/ca.pem (1082 bytes)
	I1227 20:42:53.941834  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
	I1227 20:42:53.941860  512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem, removing ...
	I1227 20:42:53.941879  512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem
	I1227 20:42:53.941908  512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/cert.pem (1123 bytes)
	I1227 20:42:53.941966  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
	I1227 20:42:53.941987  512816 exec_runner.go:144] found /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem, removing ...
	I1227 20:42:53.941997  512816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem
	I1227 20:42:53.942022  512816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22332-300670/.minikube/key.pem (1679 bytes)
	I1227 20:42:53.942086  512816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-875839 san=[127.0.0.1 192.168.76.2 force-systemd-flag-875839 localhost minikube]
	I1227 20:42:54.202929  512816 provision.go:177] copyRemoteCerts
	I1227 20:42:54.202994  512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 20:42:54.203044  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.221943  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.321588  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 20:42:54.321656  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 20:42:54.343016  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 20:42:54.343080  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 20:42:54.360298  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 20:42:54.360375  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 20:42:54.377941  512816 provision.go:87] duration metric: took 454.325341ms to configureAuth
	I1227 20:42:54.377969  512816 ubuntu.go:206] setting minikube options for container-runtime
	I1227 20:42:54.378138  512816 config.go:182] Loaded profile config "force-systemd-flag-875839": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:42:54.378151  512816 machine.go:97] duration metric: took 3.939607712s to provisionDockerMachine
	I1227 20:42:54.378158  512816 client.go:176] duration metric: took 9.675390037s to LocalClient.Create
	I1227 20:42:54.378178  512816 start.go:167] duration metric: took 9.675461349s to libmachine.API.Create "force-systemd-flag-875839"
	I1227 20:42:54.378187  512816 start.go:293] postStartSetup for "force-systemd-flag-875839" (driver="docker")
	I1227 20:42:54.378196  512816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 20:42:54.378248  512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 20:42:54.378289  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.394904  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.495529  512816 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 20:42:54.498962  512816 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 20:42:54.498995  512816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 20:42:54.499008  512816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/addons for local assets ...
	I1227 20:42:54.499064  512816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22332-300670/.minikube/files for local assets ...
	I1227 20:42:54.499159  512816 filesync.go:149] local asset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> 3025412.pem in /etc/ssl/certs
	I1227 20:42:54.499172  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> /etc/ssl/certs/3025412.pem
	I1227 20:42:54.499303  512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 20:42:54.507013  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /etc/ssl/certs/3025412.pem (1708 bytes)
	I1227 20:42:54.524308  512816 start.go:296] duration metric: took 146.106071ms for postStartSetup
	I1227 20:42:54.524674  512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
	I1227 20:42:54.541545  512816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/config.json ...
	I1227 20:42:54.541820  512816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:42:54.541868  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.558475  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.656227  512816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 20:42:54.660890  512816 start.go:128] duration metric: took 9.961854464s to createHost
	I1227 20:42:54.660916  512816 start.go:83] releasing machines lock for "force-systemd-flag-875839", held for 9.961983524s
	I1227 20:42:54.661038  512816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-875839
	I1227 20:42:54.678050  512816 ssh_runner.go:195] Run: cat /version.json
	I1227 20:42:54.678108  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.678353  512816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 20:42:54.678414  512816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-875839
	I1227 20:42:54.697205  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.698536  512816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33416 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/force-systemd-flag-875839/id_rsa Username:docker}
	I1227 20:42:54.790825  512816 ssh_runner.go:195] Run: systemctl --version
	I1227 20:42:54.890335  512816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 20:42:54.894628  512816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 20:42:54.894703  512816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 20:42:54.922183  512816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 20:42:54.922205  512816 start.go:496] detecting cgroup driver to use...
	I1227 20:42:54.922220  512816 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 20:42:54.922274  512816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 20:42:54.937492  512816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 20:42:54.950607  512816 docker.go:218] disabling cri-docker service (if available) ...
	I1227 20:42:54.950719  512816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 20:42:54.968539  512816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 20:42:54.987404  512816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 20:42:55.144395  512816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 20:42:55.270151  512816 docker.go:234] disabling docker service ...
	I1227 20:42:55.270245  512816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 20:42:55.293254  512816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 20:42:55.307641  512816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 20:42:55.428488  512816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 20:42:55.544420  512816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 20:42:55.556970  512816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 20:42:55.572425  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 20:42:55.581870  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 20:42:55.591038  512816 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 20:42:55.591152  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 20:42:55.600400  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:42:55.609307  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 20:42:55.618091  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 20:42:55.627102  512816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 20:42:55.635238  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 20:42:55.644259  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 20:42:55.653590  512816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 20:42:55.662844  512816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 20:42:55.670803  512816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 20:42:55.678906  512816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:42:55.792508  512816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 20:42:55.925141  512816 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 20:42:55.925261  512816 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 20:42:55.929276  512816 start.go:574] Will wait 60s for crictl version
	I1227 20:42:55.929388  512816 ssh_runner.go:195] Run: which crictl
	I1227 20:42:55.932931  512816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 20:42:55.957058  512816 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 20:42:55.957181  512816 ssh_runner.go:195] Run: containerd --version
	I1227 20:42:55.979962  512816 ssh_runner.go:195] Run: containerd --version
	I1227 20:42:56.007149  512816 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 20:42:56.010308  512816 cli_runner.go:164] Run: docker network inspect force-systemd-flag-875839 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 20:42:56.027937  512816 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 20:42:56.032126  512816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:42:56.043260  512816 kubeadm.go:884] updating cluster {Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 20:42:56.043408  512816 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 20:42:56.043480  512816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:42:56.072941  512816 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 20:42:56.072967  512816 containerd.go:542] Images already preloaded, skipping extraction
	I1227 20:42:56.073040  512816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 20:42:56.098189  512816 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 20:42:56.098216  512816 cache_images.go:86] Images are preloaded, skipping loading
	I1227 20:42:56.098225  512816 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1227 20:42:56.098317  512816 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-875839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 20:42:56.098386  512816 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 20:42:56.123756  512816 cni.go:84] Creating CNI manager for ""
	I1227 20:42:56.123781  512816 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 20:42:56.123798  512816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 20:42:56.123827  512816 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-875839 NodeName:force-systemd-flag-875839 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 20:42:56.123946  512816 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-875839"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 20:42:56.124019  512816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 20:42:56.133114  512816 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 20:42:56.133224  512816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 20:42:56.141401  512816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1227 20:42:56.157153  512816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 20:42:56.172136  512816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1227 20:42:56.185664  512816 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 20:42:56.189569  512816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 20:42:56.200285  512816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 20:42:56.310897  512816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 20:42:56.330461  512816 certs.go:69] Setting up /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839 for IP: 192.168.76.2
	I1227 20:42:56.330485  512816 certs.go:195] generating shared ca certs ...
	I1227 20:42:56.330501  512816 certs.go:227] acquiring lock for ca certs: {Name:mkf93c4b7b6f0a265527090e39bdf731f6a1491b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.330640  512816 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key
	I1227 20:42:56.330697  512816 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key
	I1227 20:42:56.330709  512816 certs.go:257] generating profile certs ...
	I1227 20:42:56.330767  512816 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key
	I1227 20:42:56.330784  512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt with IP's: []
	I1227 20:42:56.654113  512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt ...
	I1227 20:42:56.654148  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.crt: {Name:mk690272e7c9732b7460196a75d46ce521525785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.654393  512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key ...
	I1227 20:42:56.654411  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/client.key: {Name:mkc39b22fbff4b40897d4f98a3d62c6f55391f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.654517  512816 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1
	I1227 20:42:56.654538  512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 20:42:56.834765  512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 ...
	I1227 20:42:56.834804  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1: {Name:mkc9aaa28a12a38cdd436242cc98ebbe1035831f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.834991  512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1 ...
	I1227 20:42:56.835006  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1: {Name:mk3594c59348fecf67f0f33d24079612f39e8847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:56.835098  512816 certs.go:382] copying /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt.06feb0c1 -> /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt
	I1227 20:42:56.835196  512816 certs.go:386] copying /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key.06feb0c1 -> /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key
	I1227 20:42:56.835265  512816 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key
	I1227 20:42:56.835286  512816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt with IP's: []
	I1227 20:42:57.497782  512816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt ...
	I1227 20:42:57.497816  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt: {Name:mk6c13ddc40f97cd4770101e7d4b970e00fe21ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:57.498023  512816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key ...
	I1227 20:42:57.498038  512816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key: {Name:mk60c7e4a1d2a1da5fcd88dbfb787475edf7630f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 20:42:57.498129  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 20:42:57.498152  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 20:42:57.498166  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 20:42:57.498182  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 20:42:57.498197  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 20:42:57.498209  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 20:42:57.498226  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 20:42:57.498241  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 20:42:57.498302  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem (1338 bytes)
	W1227 20:42:57.498346  512816 certs.go:480] ignoring /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541_empty.pem, impossibly tiny 0 bytes
	I1227 20:42:57.498360  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 20:42:57.498388  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/ca.pem (1082 bytes)
	I1227 20:42:57.498416  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/cert.pem (1123 bytes)
	I1227 20:42:57.498445  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/key.pem (1679 bytes)
	I1227 20:42:57.498495  512816 certs.go:484] found cert: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem (1708 bytes)
	I1227 20:42:57.498530  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem -> /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.498552  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.498567  512816 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem -> /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.499202  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 20:42:57.518888  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1227 20:42:57.541602  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 20:42:57.560371  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 20:42:57.578574  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 20:42:57.596428  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 20:42:57.614059  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 20:42:57.631746  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/force-systemd-flag-875839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 20:42:57.649051  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/ssl/certs/3025412.pem --> /usr/share/ca-certificates/3025412.pem (1708 bytes)
	I1227 20:42:57.666899  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 20:42:57.685179  512816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22332-300670/.minikube/certs/302541.pem --> /usr/share/ca-certificates/302541.pem (1338 bytes)
	I1227 20:42:57.704185  512816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 20:42:57.717788  512816 ssh_runner.go:195] Run: openssl version
	I1227 20:42:57.724506  512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.732186  512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/302541.pem /etc/ssl/certs/302541.pem
	I1227 20:42:57.739774  512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.743710  512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 20:01 /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.743773  512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/302541.pem
	I1227 20:42:57.785241  512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 20:42:57.792974  512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/302541.pem /etc/ssl/certs/51391683.0
	I1227 20:42:57.800846  512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.810694  512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3025412.pem /etc/ssl/certs/3025412.pem
	I1227 20:42:57.819628  512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.825369  512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 20:01 /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.825452  512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3025412.pem
	I1227 20:42:57.867666  512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 20:42:57.875568  512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3025412.pem /etc/ssl/certs/3ec20f2e.0
	I1227 20:42:57.882973  512816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.890507  512816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 20:42:57.898264  512816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.902159  512816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 19:56 /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.902227  512816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 20:42:57.943302  512816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 20:42:57.950884  512816 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
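The openssl/ln sequence above follows the standard OpenSSL subject-hash trust-store convention: each CA file is copied under /usr/share/ca-certificates, its subject hash is computed, and a symlink named <hash>.0 is created in /etc/ssl/certs. A minimal sketch of the same step, assuming a stock openssl binary on the node (file names taken from the log above):

	# Compute the subject hash, then create the c_rehash-style symlink,
	# mirroring the "openssl x509 -hash" and "ln -fs" commands in the log.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"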
	I1227 20:42:57.958222  512816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 20:42:57.961793  512816 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 20:42:57.961846  512816 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-875839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-875839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:42:57.961921  512816 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 20:42:57.961986  512816 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 20:42:57.992506  512816 cri.go:96] found id: ""
	I1227 20:42:57.992583  512816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 20:42:58.003253  512816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 20:42:58.011987  512816 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 20:42:58.012081  512816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 20:42:58.020896  512816 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 20:42:58.020916  512816 kubeadm.go:158] found existing configuration files:
	
	I1227 20:42:58.020969  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 20:42:58.030325  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 20:42:58.030399  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 20:42:58.039358  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 20:42:58.049599  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 20:42:58.049713  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 20:42:58.058561  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 20:42:58.068422  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 20:42:58.068537  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 20:42:58.077497  512816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 20:42:58.087253  512816 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 20:42:58.087372  512816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
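The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is removed unless it already points at the expected control-plane endpoint, clearing the way for a fresh kubeadm init. A standalone sketch of the same check (the loop form is hypothetical; the endpoint is copied from the log):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done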
	I1227 20:42:58.096312  512816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 20:42:58.136339  512816 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 20:42:58.136633  512816 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 20:42:58.210092  512816 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 20:42:58.210244  512816 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 20:42:58.210334  512816 kubeadm.go:319] OS: Linux
	I1227 20:42:58.210426  512816 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 20:42:58.210510  512816 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 20:42:58.210589  512816 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 20:42:58.210671  512816 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 20:42:58.210755  512816 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 20:42:58.210837  512816 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 20:42:58.210918  512816 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 20:42:58.211026  512816 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 20:42:58.211119  512816 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 20:42:58.277645  512816 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 20:42:58.277833  512816 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 20:42:58.277971  512816 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 20:42:58.283796  512816 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 20:42:58.290858  512816 out.go:252]   - Generating certificates and keys ...
	I1227 20:42:58.291030  512816 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 20:42:58.291136  512816 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 20:42:58.557075  512816 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 20:42:58.748413  512816 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 20:42:58.793614  512816 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 20:42:59.304343  512816 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 20:42:59.833617  512816 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 20:42:59.834012  512816 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:43:00.429800  512816 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 20:43:00.430239  512816 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-875839 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 20:43:00.529822  512816 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 20:43:01.296650  512816 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 20:43:01.612939  512816 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 20:43:01.613240  512816 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 20:43:01.833117  512816 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 20:43:02.012700  512816 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 20:43:02.166458  512816 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 20:43:02.299475  512816 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 20:43:02.455123  512816 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 20:43:02.456053  512816 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 20:43:02.458808  512816 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 20:43:02.462661  512816 out.go:252]   - Booting up control plane ...
	I1227 20:43:02.462775  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 20:43:02.462871  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 20:43:02.462950  512816 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 20:43:02.480678  512816 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 20:43:02.481005  512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 20:43:02.489220  512816 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 20:43:02.489577  512816 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 20:43:02.489803  512816 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 20:43:02.667653  512816 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 20:43:02.667780  512816 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 20:43:42.180572  490122 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002384899s
	I1227 20:43:42.180603  490122 kubeadm.go:319] 
	I1227 20:43:42.180662  490122 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 20:43:42.180696  490122 kubeadm.go:319] 	- The kubelet is not running
	I1227 20:43:42.181245  490122 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 20:43:42.181273  490122 kubeadm.go:319] 
	I1227 20:43:42.181550  490122 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 20:43:42.181612  490122 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 20:43:42.181673  490122 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 20:43:42.181680  490122 kubeadm.go:319] 
	I1227 20:43:42.190437  490122 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 20:43:42.190957  490122 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 20:43:42.191079  490122 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 20:43:42.191447  490122 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 20:43:42.191488  490122 kubeadm.go:319] 
	I1227 20:43:42.191645  490122 kubeadm.go:403] duration metric: took 8m7.873865664s to StartCluster
	I1227 20:43:42.191691  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 20:43:42.191740  490122 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 20:43:42.191765  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 20:43:42.220088  490122 cri.go:96] found id: ""
	I1227 20:43:42.220132  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.220143  490122 logs.go:284] No container was found matching "kube-apiserver"
	I1227 20:43:42.220150  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 20:43:42.220226  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 20:43:42.250604  490122 cri.go:96] found id: ""
	I1227 20:43:42.250629  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.250638  490122 logs.go:284] No container was found matching "etcd"
	I1227 20:43:42.250645  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 20:43:42.250710  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 20:43:42.290197  490122 cri.go:96] found id: ""
	I1227 20:43:42.290221  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.290230  490122 logs.go:284] No container was found matching "coredns"
	I1227 20:43:42.290237  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 20:43:42.290298  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 20:43:42.319099  490122 cri.go:96] found id: ""
	I1227 20:43:42.319172  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.319244  490122 logs.go:284] No container was found matching "kube-scheduler"
	I1227 20:43:42.319264  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 20:43:42.319393  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 20:43:42.356871  490122 cri.go:96] found id: ""
	I1227 20:43:42.356896  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.356905  490122 logs.go:284] No container was found matching "kube-proxy"
	I1227 20:43:42.356912  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 20:43:42.356972  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 20:43:42.381385  490122 cri.go:96] found id: ""
	I1227 20:43:42.381410  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.381420  490122 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 20:43:42.381427  490122 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 20:43:42.381487  490122 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 20:43:42.405560  490122 cri.go:96] found id: ""
	I1227 20:43:42.405588  490122 logs.go:282] 0 containers: []
	W1227 20:43:42.405609  490122 logs.go:284] No container was found matching "kindnet"
	I1227 20:43:42.405638  490122 logs.go:123] Gathering logs for kubelet ...
	I1227 20:43:42.405658  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 20:43:42.465711  490122 logs.go:123] Gathering logs for dmesg ...
	I1227 20:43:42.465748  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 20:43:42.480491  490122 logs.go:123] Gathering logs for describe nodes ...
	I1227 20:43:42.480521  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 20:43:42.551679  490122 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:43:42.542984    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.543806    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.545464    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.545794    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.547357    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 20:43:42.542984    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.543806    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.545464    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.545794    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:42.547357    4833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 20:43:42.551707  490122 logs.go:123] Gathering logs for containerd ...
	I1227 20:43:42.551718  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 20:43:42.595963  490122 logs.go:123] Gathering logs for container status ...
	I1227 20:43:42.596039  490122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 20:43:42.628573  490122 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002384899s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
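The SystemVerification warning above points at the likely root cause of the kubelet health-check timeout: the host is on cgroup v1, and kubelet v1.35+ fails validation on cgroup v1 unless the KubeletConfiguration field FailCgroupV1 is explicitly set to false. One quick way to confirm which cgroup hierarchy a host mounts (assuming util-linux stat):

	# "cgroup2fs" indicates the unified v2 hierarchy; "tmpfs" indicates legacy v1.
	stat -fc %T /sys/fs/cgroup/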
	W1227 20:43:42.628630  490122 out.go:285] * 
	W1227 20:43:42.628681  490122 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1227 20:43:42.628699  490122 out.go:285] * 
	W1227 20:43:42.628950  490122 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 20:43:42.634684  490122 out.go:203] 
	W1227 20:43:42.638442  490122 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1227 20:43:42.638490  490122 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 20:43:42.638512  490122 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 20:43:42.641703  490122 out.go:203] 
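The suggestion printed above can be tried as-is. A sketch of the recommended invocation, using the flag exactly as the log states (the kubelet journal below shows a cgroup v1 validation failure, so the cgroup-driver setting alone may not be sufficient on this host):

	minikube start --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd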
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206740933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206756293Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206786586Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206800346Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206809979Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206820203Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206829450Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206840141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206851374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.206879928Z" level=info msg="Connect containerd service"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.207290457Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.207838111Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.224584582Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.224655910Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.224681912Z" level=info msg="Start subscribing containerd event"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.224736107Z" level=info msg="Start recovering state"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.259024368Z" level=info msg="Start event monitor"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.259257780Z" level=info msg="Start cni network conf syncer for default"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.259332062Z" level=info msg="Start streaming server"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.259391024Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.259450635Z" level=info msg="runtime interface starting up..."
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.259504657Z" level=info msg="starting plugins..."
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.259565458Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 20:35:32 force-systemd-env-857112 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 27 20:35:32 force-systemd-env-857112 containerd[757]: time="2025-12-27T20:35:32.261721555Z" level=info msg="containerd successfully booted in 0.075996s"
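The containerd journal above is otherwise healthy; the single error, "failed to load cni during init", is the normal state before any CNI config has been written and is likely unrelated to the kubelet failure below. That assumption can be checked by listing the default CNI config directory on the node:

	ls /etc/cni/net.d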
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 20:43:43.985439    4962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:43.986024    4962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:43.987853    4962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:43.988390    4962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 20:43:43.990105    4962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 19:54] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 20:43:44 up  2:26,  0 user,  load average: 0.91, 1.43, 1.93
	Linux force-systemd-env-857112 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 20:43:40 force-systemd-env-857112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:43:41 force-systemd-env-857112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 27 20:43:41 force-systemd-env-857112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:43:41 force-systemd-env-857112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:43:41 force-systemd-env-857112 kubelet[4758]: E1227 20:43:41.578437    4758 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:43:41 force-systemd-env-857112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:43:41 force-systemd-env-857112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:43:42 force-systemd-env-857112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 20:43:42 force-systemd-env-857112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:43:42 force-systemd-env-857112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:43:42 force-systemd-env-857112 kubelet[4786]: E1227 20:43:42.348366    4786 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:43:42 force-systemd-env-857112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:43:42 force-systemd-env-857112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:43:43 force-systemd-env-857112 kubelet[4858]: E1227 20:43:43.080216    4858 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 20:43:43 force-systemd-env-857112 kubelet[4925]: E1227 20:43:43.835021    4925 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 20:43:43 force-systemd-env-857112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-857112 -n force-systemd-env-857112
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-857112 -n force-systemd-env-857112: exit status 6 (303.546114ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 20:43:44.410812  515705 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-857112" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-857112" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-857112" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-857112
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-857112: (1.985686753s)
--- FAIL: TestForceSystemdEnv (506.03s)

                                                
                                    

Test pass (305/337)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.15
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.35.0/json-events 4.23
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.13
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.13
27 TestAddons/Setup 137.39
29 TestAddons/serial/Volcano 40.51
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.87
35 TestAddons/parallel/Registry 15.62
36 TestAddons/parallel/RegistryCreds 0.77
37 TestAddons/parallel/Ingress 19.18
38 TestAddons/parallel/InspektorGadget 11.78
39 TestAddons/parallel/MetricsServer 5.76
41 TestAddons/parallel/CSI 43.01
42 TestAddons/parallel/Headlamp 16
43 TestAddons/parallel/CloudSpanner 5.61
44 TestAddons/parallel/LocalPath 8.9
45 TestAddons/parallel/NvidiaDevicePlugin 5.6
46 TestAddons/parallel/Yakd 11.86
48 TestAddons/StoppedEnableDisable 12.39
49 TestCertOptions 29.53
50 TestCertExpiration 215.19
54 TestDockerEnvContainerd 43.37
58 TestErrorSpam/setup 26.17
59 TestErrorSpam/start 0.82
60 TestErrorSpam/status 1.17
61 TestErrorSpam/pause 1.77
62 TestErrorSpam/unpause 1.93
63 TestErrorSpam/stop 1.64
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 45.61
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.97
70 TestFunctional/serial/KubeContext 0.13
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
75 TestFunctional/serial/CacheCmd/cache/add_local 1.2
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 47.81
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.5
86 TestFunctional/serial/LogsFileCmd 1.53
87 TestFunctional/serial/InvalidService 4.67
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 9.35
91 TestFunctional/parallel/DryRun 0.55
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.06
97 TestFunctional/parallel/ServiceCmdConnect 8.89
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 20.95
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.43
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.25
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
113 TestFunctional/parallel/License 0.3
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.48
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
129 TestFunctional/parallel/MountCmd/any-port 8.35
130 TestFunctional/parallel/ServiceCmd/List 0.53
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
133 TestFunctional/parallel/ServiceCmd/Format 0.4
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/MountCmd/specific-port 2.14
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.18
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.36
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.88
144 TestFunctional/parallel/ImageCommands/Setup 0.63
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.24
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 172.96
163 TestMultiControlPlane/serial/DeployApp 7.04
164 TestMultiControlPlane/serial/PingHostFromPods 1.66
165 TestMultiControlPlane/serial/AddWorkerNode 30.11
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
168 TestMultiControlPlane/serial/CopyFile 19.87
169 TestMultiControlPlane/serial/StopSecondaryNode 12.95
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
171 TestMultiControlPlane/serial/RestartSecondaryNode 12.88
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.11
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.21
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.59
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.07
176 TestMultiControlPlane/serial/StopCluster 36.43
177 TestMultiControlPlane/serial/RestartCluster 59.28
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 50.17
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.13
185 TestJSONOutput/start/Command 46.6
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.73
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.63
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.05
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 34.72
211 TestKicCustomNetwork/use_default_bridge_network 30.26
212 TestKicExistingNetwork 30.9
213 TestKicCustomSubnet 29.39
214 TestKicStaticIP 30.74
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 62.9
219 TestMountStart/serial/StartWithMountFirst 8.72
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 8.24
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.96
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 76.57
231 TestMultiNode/serial/DeployApp2Nodes 4.5
232 TestMultiNode/serial/PingHostFrom2Pods 1.02
233 TestMultiNode/serial/AddNode 30.09
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.54
237 TestMultiNode/serial/StopNode 2.43
238 TestMultiNode/serial/StartAfterStop 7.81
239 TestMultiNode/serial/RestartKeepsNodes 74.47
240 TestMultiNode/serial/DeleteNode 5.62
241 TestMultiNode/serial/StopMultiNode 24.2
242 TestMultiNode/serial/RestartMultiNode 49.39
243 TestMultiNode/serial/ValidateNameConflict 29.73
250 TestScheduledStopUnix 103.9
253 TestInsufficientStorage 12.57
254 TestRunningBinaryUpgrade 319.34
256 TestKubernetesUpgrade 330.85
257 TestMissingContainerUpgrade 129.72
259 TestPause/serial/Start 56.41
260 TestPause/serial/SecondStartNoReconfiguration 8.15
261 TestPause/serial/Pause 1.07
262 TestPause/serial/VerifyStatus 0.32
263 TestPause/serial/Unpause 0.62
264 TestPause/serial/PauseAgain 0.86
265 TestPause/serial/DeletePaused 2.41
266 TestPause/serial/VerifyDeletedResources 0.15
267 TestStoppedBinaryUpgrade/Setup 0.9
268 TestStoppedBinaryUpgrade/Upgrade 305.56
269 TestStoppedBinaryUpgrade/MinikubeLogs 2.34
277 TestPreload/Start-NoPreload-PullImage 65.74
278 TestPreload/Restart-With-Preload-Check-User-Image 47.18
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
282 TestNoKubernetes/serial/StartWithK8s 28.29
283 TestNoKubernetes/serial/StartWithStopK8s 6.79
284 TestNoKubernetes/serial/Start 7.44
285 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
287 TestNoKubernetes/serial/ProfileList 0.99
288 TestNoKubernetes/serial/Stop 1.29
289 TestNoKubernetes/serial/StartNoArgs 6.58
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
298 TestNetworkPlugins/group/false 3.65
303 TestStartStop/group/old-k8s-version/serial/FirstStart 60.49
304 TestStartStop/group/old-k8s-version/serial/DeployApp 10.43
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
306 TestStartStop/group/old-k8s-version/serial/Stop 12.1
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/old-k8s-version/serial/SecondStart 47.35
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
312 TestStartStop/group/old-k8s-version/serial/Pause 3.14
314 TestStartStop/group/no-preload/serial/FirstStart 52.86
315 TestStartStop/group/no-preload/serial/DeployApp 9.35
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
317 TestStartStop/group/no-preload/serial/Stop 12.11
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
319 TestStartStop/group/no-preload/serial/SecondStart 49.58
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
323 TestStartStop/group/no-preload/serial/Pause 3.02
325 TestStartStop/group/embed-certs/serial/FirstStart 44.39
326 TestStartStop/group/embed-certs/serial/DeployApp 10.34
327 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
328 TestStartStop/group/embed-certs/serial/Stop 12.15
329 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
330 TestStartStop/group/embed-certs/serial/SecondStart 52.73
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
333 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.02
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
336 TestStartStop/group/embed-certs/serial/Pause 4.25
338 TestStartStop/group/newest-cni/serial/FirstStart 34.71
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.57
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
342 TestStartStop/group/newest-cni/serial/Stop 1.45
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
344 TestStartStop/group/newest-cni/serial/SecondStart 14.32
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.46
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.56
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
350 TestStartStop/group/newest-cni/serial/Pause 2.97
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.87
353 TestPreload/PreloadSrc/gcs 5.98
354 TestPreload/PreloadSrc/github 8.07
355 TestPreload/PreloadSrc/gcs-cached 0.76
356 TestNetworkPlugins/group/auto/Start 47.62
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.18
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
360 TestNetworkPlugins/group/auto/KubeletFlags 0.46
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.96
362 TestNetworkPlugins/group/auto/NetCatPod 11.39
363 TestNetworkPlugins/group/kindnet/Start 52.15
364 TestNetworkPlugins/group/auto/DNS 0.35
365 TestNetworkPlugins/group/auto/Localhost 0.18
366 TestNetworkPlugins/group/auto/HairPin 0.17
367 TestNetworkPlugins/group/calico/Start 59.45
368 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
369 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
370 TestNetworkPlugins/group/kindnet/NetCatPod 10.39
371 TestNetworkPlugins/group/kindnet/DNS 0.27
372 TestNetworkPlugins/group/kindnet/Localhost 0.2
373 TestNetworkPlugins/group/kindnet/HairPin 0.18
374 TestNetworkPlugins/group/calico/ControllerPod 6.01
375 TestNetworkPlugins/group/custom-flannel/Start 55.6
376 TestNetworkPlugins/group/calico/KubeletFlags 0.43
377 TestNetworkPlugins/group/calico/NetCatPod 11.36
378 TestNetworkPlugins/group/calico/DNS 0.23
379 TestNetworkPlugins/group/calico/Localhost 0.19
380 TestNetworkPlugins/group/calico/HairPin 0.21
381 TestNetworkPlugins/group/enable-default-cni/Start 66.69
382 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
383 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.33
384 TestNetworkPlugins/group/custom-flannel/DNS 0.23
385 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
386 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
387 TestNetworkPlugins/group/flannel/Start 54.62
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.32
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
393 TestNetworkPlugins/group/bridge/Start 72.63
394 TestNetworkPlugins/group/flannel/ControllerPod 6.01
395 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
396 TestNetworkPlugins/group/flannel/NetCatPod 10.39
397 TestNetworkPlugins/group/flannel/DNS 0.32
398 TestNetworkPlugins/group/flannel/Localhost 0.2
399 TestNetworkPlugins/group/flannel/HairPin 0.19
400 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
401 TestNetworkPlugins/group/bridge/NetCatPod 10.27
402 TestNetworkPlugins/group/bridge/DNS 0.18
403 TestNetworkPlugins/group/bridge/Localhost 0.14
404 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.28.0/json-events (5.15s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-964397 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-964397 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.146591685s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.15s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 19:55:27.667031  302541 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1227 19:55:27.667107  302541 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-964397
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-964397: exit status 85 (87.326662ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-964397 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-964397 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:22.568307  302547 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:22.568431  302547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:22.568442  302547 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:22.568448  302547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:22.568716  302547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	W1227 19:55:22.568855  302547 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22332-300670/.minikube/config/config.json: open /home/jenkins/minikube-integration/22332-300670/.minikube/config/config.json: no such file or directory
	I1227 19:55:22.569293  302547 out.go:368] Setting JSON to true
	I1227 19:55:22.570089  302547 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5874,"bootTime":1766859449,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 19:55:22.570162  302547 start.go:143] virtualization:  
	I1227 19:55:22.575616  302547 out.go:99] [download-only-964397] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1227 19:55:22.575826  302547 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 19:55:22.575910  302547 notify.go:221] Checking for updates...
	I1227 19:55:22.579141  302547 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:22.582541  302547 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:22.585857  302547 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 19:55:22.589108  302547 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 19:55:22.592238  302547 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 19:55:22.598458  302547 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:55:22.598731  302547 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:22.623327  302547 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 19:55:22.623490  302547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:22.689814  302547 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 19:55:22.680742619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:22.689923  302547 docker.go:319] overlay module found
	I1227 19:55:22.692989  302547 out.go:99] Using the docker driver based on user configuration
	I1227 19:55:22.693028  302547 start.go:309] selected driver: docker
	I1227 19:55:22.693035  302547 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:22.693149  302547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:22.751094  302547 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 19:55:22.741842269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:22.751277  302547 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:22.751564  302547 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 19:55:22.751720  302547 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:55:22.755023  302547 out.go:171] Using Docker driver with root privileges
	I1227 19:55:22.758119  302547 cni.go:84] Creating CNI manager for ""
	I1227 19:55:22.758189  302547 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 19:55:22.758205  302547 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 19:55:22.758286  302547 start.go:353] cluster config:
	{Name:download-only-964397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-964397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 19:55:22.761478  302547 out.go:99] Starting "download-only-964397" primary control-plane node in "download-only-964397" cluster
	I1227 19:55:22.761501  302547 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 19:55:22.764469  302547 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 19:55:22.764510  302547 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 19:55:22.764673  302547 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 19:55:22.780380  302547 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:22.780585  302547 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 19:55:22.780690  302547 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 19:55:22.829334  302547 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 19:55:22.829371  302547 cache.go:65] Caching tarball of preloaded images
	I1227 19:55:22.829557  302547 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 19:55:22.832995  302547 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 19:55:22.833030  302547 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 19:55:22.833038  302547 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1227 19:55:22.914430  302547 preload.go:313] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1227 19:55:22.914606  302547 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 19:55:26.292253  302547 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1227 19:55:26.292659  302547 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/download-only-964397/config.json ...
	I1227 19:55:26.292694  302547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/download-only-964397/config.json: {Name:mk954521b024a4f7f4d8c82b719e8804aaac9535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 19:55:26.292891  302547 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 19:55:26.293089  302547 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22332-300670/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-964397 host does not exist
	  To start a cluster, run: "minikube start -p download-only-964397"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
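
Note: the log above traces the preload fetch: minikube resolves the tarball URL, asks the GCS API for the MD5 checksum, and downloads with the checksum appended to the URL. As a rough sketch only (these commands are not part of the test suite), the same fetch-and-verify step can be reproduced by hand with the URL and checksum logged above:

  # hypothetical manual reproduction of the logged preload download
  URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
  curl -fsSL -o preload.tar.lz4 "$URL"
  # checksum value taken from the "Got checksum from GCS API" line above
  echo "38d7f581f2fa4226c8af2c9106b982b7  preload.tar.lz4" | md5sum -c -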

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-964397
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0/json-events (4.23s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-821306 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-821306 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.233846119s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (4.23s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 19:55:32.356002  302541 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 19:55:32.356040  302541 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-821306
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-821306: exit status 85 (88.022801ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-964397 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-964397 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ delete  │ -p download-only-964397                                                                                                                                                               │ download-only-964397 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │ 27 Dec 25 19:55 UTC │
	│ start   │ -o=json --download-only -p download-only-821306 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-821306 │ jenkins │ v1.37.0 │ 27 Dec 25 19:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 19:55:28
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 19:55:28.163362  302749 out.go:360] Setting OutFile to fd 1 ...
	I1227 19:55:28.163481  302749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:28.163493  302749 out.go:374] Setting ErrFile to fd 2...
	I1227 19:55:28.163498  302749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 19:55:28.163753  302749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 19:55:28.164154  302749 out.go:368] Setting JSON to true
	I1227 19:55:28.164933  302749 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5880,"bootTime":1766859449,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 19:55:28.165002  302749 start.go:143] virtualization:  
	I1227 19:55:28.168466  302749 out.go:99] [download-only-821306] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 19:55:28.168705  302749 notify.go:221] Checking for updates...
	I1227 19:55:28.171676  302749 out.go:171] MINIKUBE_LOCATION=22332
	I1227 19:55:28.175062  302749 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 19:55:28.178018  302749 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 19:55:28.181073  302749 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 19:55:28.184089  302749 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 19:55:28.189838  302749 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 19:55:28.190147  302749 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 19:55:28.226612  302749 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 19:55:28.227013  302749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:28.282094  302749 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 19:55:28.272556574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:28.282215  302749 docker.go:319] overlay module found
	I1227 19:55:28.285266  302749 out.go:99] Using the docker driver based on user configuration
	I1227 19:55:28.285311  302749 start.go:309] selected driver: docker
	I1227 19:55:28.285319  302749 start.go:928] validating driver "docker" against <nil>
	I1227 19:55:28.285438  302749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 19:55:28.341643  302749 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 19:55:28.332581824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 19:55:28.341795  302749 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 19:55:28.342112  302749 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 19:55:28.342275  302749 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 19:55:28.345417  302749 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-821306 host does not exist
	  To start a cluster, run: "minikube start -p download-only-821306"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-821306
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1227 19:55:33.525303  302541 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-831550 --alsologtostderr --binary-mirror http://127.0.0.1:43931 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-831550" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-831550
--- PASS: TestBinaryMirror (0.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.13s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-829359
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-829359: exit status 85 (130.800867ms)
-- stdout --
	* Profile "addons-829359" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-829359"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.13s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.13s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-829359
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-829359: exit status 85 (131.317796ms)
-- stdout --
	* Profile "addons-829359" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-829359"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.13s)

TestAddons/Setup (137.39s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-829359 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-829359 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m17.391432908s)
--- PASS: TestAddons/Setup (137.39s)
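
Note: this start enables every addon under test via repeated --addons flags; the parallel tests below then turn each one off with "addons disable". A minimal sketch of toggling a single addon on the resulting profile (metrics-server chosen arbitrarily, same binary and profile name as above):

  # enable, then disable, one addon on the already-running profile
  out/minikube-linux-arm64 -p addons-829359 addons enable metrics-server
  out/minikube-linux-arm64 -p addons-829359 addons disable metrics-server --alsologtostderr -v=1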

TestAddons/serial/Volcano (40.51s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 42.955789ms
addons_test.go:886: volcano-controller stabilized in 43.863499ms
addons_test.go:870: volcano-scheduler stabilized in 44.294748ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-6c7b5cd66b-647vn" [4264884a-7c45-4fc2-a882-ad85dbec9a56] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003094757s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-7f4844c49c-ttrpm" [c3cc2c2f-cdd0-4cc1-bc18-a7fbe4010be7] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003448339s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-8f57bcd69-d9dd7" [ae549027-0fdd-4fc7-9f6c-262bf05f5f25] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003545061s
addons_test.go:905: (dbg) Run:  kubectl --context addons-829359 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-829359 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-829359 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [78adeb8a-26ed-4cb2-af43-8ee17e586df6] Pending
helpers_test.go:353: "test-job-nginx-0" [78adeb8a-26ed-4cb2-af43-8ee17e586df6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [78adeb8a-26ed-4cb2-af43-8ee17e586df6] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.002827324s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-829359 addons disable volcano --alsologtostderr -v=1: (11.901384578s)
--- PASS: TestAddons/serial/Volcano (40.51s)
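
Note: the 3m0s wait above polls for pods carrying the vcjob's label in the my-volcano namespace. Roughly the same check can be run by hand with kubectl (a sketch, assuming the kubeconfig context created by this run):

  # watch the pods the test waits on, using the label and namespace from the log
  kubectl --context addons-829359 -n my-volcano get pods -l volcano.sh/job-name=test-job -w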

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-829359 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-829359 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (10.87s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-829359 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-829359 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b8773634-63a6-4e89-92d2-7d58b00824eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b8773634-63a6-4e89-92d2-7d58b00824eb] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003208349s
addons_test.go:696: (dbg) Run:  kubectl --context addons-829359 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-829359 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-829359 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-829359 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.87s)

TestAddons/parallel/Registry (15.62s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 5.739835ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-k4g2h" [6cd06e12-a099-4c52-a4cf-d90e8c3e4269] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00289096s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-c6hk8" [e5da6d91-f2ce-403a-9c78-636ecd0b29c0] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003514384s
addons_test.go:394: (dbg) Run:  kubectl --context addons-829359 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-829359 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-829359 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.503196242s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 ip
2025/12/27 19:59:07 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.62s)

TestAddons/parallel/RegistryCreds (0.77s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.878006ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-829359
addons_test.go:334: (dbg) Run:  kubectl --context addons-829359 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)

TestAddons/parallel/Ingress (19.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-829359 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-829359 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-829359 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [b33d8497-fce2-40f3-87e1-8552214dc1a2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [b33d8497-fce2-40f3-87e1-8552214dc1a2] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.003634389s
I1227 19:59:59.921927  302541 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-829359 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Done: kubectl --context addons-829359 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.114090236s)
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-829359 addons disable ingress-dns --alsologtostderr -v=1: (1.880697283s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-829359 addons disable ingress --alsologtostderr -v=1: (7.866028122s)
--- PASS: TestAddons/parallel/Ingress (19.18s)
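
The ingress check above boils down to curling the controller from inside the node with the test Host header once the nginx pod is healthy; a manual equivalent (a sketch, using this run's profile and testdata):

	kubectl --context addons-829359 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-829359 replace --force -f testdata/nginx-pod-svc.yaml
	# the ingress controller answers on the node's loopback; route by Host header
	out/minikube-linux-arm64 -p addons-829359 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"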

TestAddons/parallel/InspektorGadget (11.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-4nf82" [a05e0fc5-c24d-42ed-b49f-d50ff6faae98] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00390934s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-829359 addons disable inspektor-gadget --alsologtostderr -v=1: (5.771117788s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)

TestAddons/parallel/MetricsServer (5.76s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.983057ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-zjhfn" [c87f36dd-ad39-4e48-8f3c-2bbfca07428b] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003562971s
addons_test.go:465: (dbg) Run:  kubectl --context addons-829359 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.76s)
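
Once the metrics-server pod is healthy, the check is simply that pod metrics resolve; a manual equivalent (a sketch):

	out/minikube-linux-arm64 -p addons-829359 addons enable metrics-server
	# succeeds only once the metrics pipeline is actually serving data
	kubectl --context addons-829359 top pods -n kube-system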

TestAddons/parallel/CSI (43.01s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1227 19:59:04.557802  302541 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 19:59:04.562080  302541 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 19:59:04.562109  302541 kapi.go:107] duration metric: took 7.118517ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.129397ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-829359 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-829359 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [9cac7fb8-e611-44b1-b272-755f2d66da8d] Pending
helpers_test.go:353: "task-pv-pod" [9cac7fb8-e611-44b1-b272-755f2d66da8d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [9cac7fb8-e611-44b1-b272-755f2d66da8d] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.00619202s
addons_test.go:574: (dbg) Run:  kubectl --context addons-829359 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-829359 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-829359 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-829359 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-829359 delete pod task-pv-pod: (1.107949279s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-829359 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-829359 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-829359 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [f336598f-c56b-4488-af71-2b81037c8999] Pending
helpers_test.go:353: "task-pv-pod-restore" [f336598f-c56b-4488-af71-2b81037c8999] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [f336598f-c56b-4488-af71-2b81037c8999] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004685882s
addons_test.go:616: (dbg) Run:  kubectl --context addons-829359 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-829359 delete pod task-pv-pod-restore: (1.314380879s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-829359 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-829359 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-829359 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.905448131s)
--- PASS: TestAddons/parallel/CSI (43.01s)
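
The CSI flow exercised here is: bind a PVC, snapshot it, delete the original, then restore a new PVC from the snapshot. A minimal snapshot step in the shape of this test might look like the following (a sketch; the volumeSnapshotClassName csi-hostpath-snapclass is illustrative, not read from this run's testdata):

	kubectl --context addons-829359 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass  # illustrative class name
	  source:
	    persistentVolumeClaimName: hpvc
	EOF
	# readyToUse flips to true once the snapshot is cut
	kubectl --context addons-829359 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}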

TestAddons/parallel/Headlamp (16s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-829359 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-829359 --alsologtostderr -v=1: (1.108459354s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-mk5tx" [1e501b0c-4d0e-4363-9390-509dc2257dd8] Pending
helpers_test.go:353: "headlamp-6d8d595f-mk5tx" [1e501b0c-4d0e-4363-9390-509dc2257dd8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-mk5tx" [1e501b0c-4d0e-4363-9390-509dc2257dd8] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.005112411s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-829359 addons disable headlamp --alsologtostderr -v=1: (5.888734663s)
--- PASS: TestAddons/parallel/Headlamp (16.00s)

TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-d5kwq" [ffc2e544-a906-4f29-bc6e-8af6133f2f8f] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009355717s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/parallel/LocalPath (8.9s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-829359 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-829359 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-829359 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [07399531-e981-4bb5-89c8-421d33d39d27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [07399531-e981-4bb5-89c8-421d33d39d27] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [07399531-e981-4bb5-89c8-421d33d39d27] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.094437167s
addons_test.go:969: (dbg) Run:  kubectl --context addons-829359 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 ssh "cat /opt/local-path-provisioner/pvc-9edfb89f-1fd9-4f99-a9b2-37ab17a7fb40_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-829359 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-829359 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.90s)

TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-kzvdg" [c9dcebdd-2c58-48d5-9ba4-712a164910ad] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004620897s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

TestAddons/parallel/Yakd (11.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-865bfb49b9-xd5fh" [0b25743f-ab79-4627-9383-20c38c17b35e] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004247526s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-829359 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-829359 addons disable yakd --alsologtostderr -v=1: (5.853220162s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-829359
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-829359: (12.113075454s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-829359
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-829359
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-829359
--- PASS: TestAddons/StoppedEnableDisable (12.39s)
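
What this verifies: addon toggles are accepted while the cluster is stopped and take effect on the next start. The manual equivalent (a sketch, any profile):

	out/minikube-linux-arm64 stop -p addons-829359
	# both commands succeed against the stopped cluster
	out/minikube-linux-arm64 addons enable dashboard -p addons-829359
	out/minikube-linux-arm64 addons disable dashboard -p addons-829359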

TestCertOptions (29.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-323659 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1227 20:43:50.191659  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-323659 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (26.768894902s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-323659 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-323659 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-323659 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-323659" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-323659
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-323659: (2.033654741s)
--- PASS: TestCertOptions (29.53s)
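
The SAN check here can be reproduced by starting a profile with extra apiserver IPs/names and inspecting the generated certificate (a sketch; the profile name cert-options-demo is illustrative):

	out/minikube-linux-arm64 start -p cert-options-demo --memory=3072 \
	  --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=docker --container-runtime=containerd
	# the requested IPs and names should appear under X509v3 Subject Alternative Name
	out/minikube-linux-arm64 -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"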

TestCertExpiration (215.19s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-794518 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-794518 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.831549044s)
E1227 20:41:53.240649  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-794518 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-794518 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.915320693s)
helpers_test.go:176: Cleaning up "cert-expiration-794518" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-794518
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-794518: (2.446339582s)
--- PASS: TestCertExpiration (215.19s)
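
The ~215s runtime is mostly the expiry window: start with a short --cert-expiration, wait for it to lapse, then start again with a long one so minikube regenerates the certificates (a sketch; the profile name cert-exp-demo is illustrative):

	out/minikube-linux-arm64 start -p cert-exp-demo --memory=3072 --cert-expiration=3m \
	  --driver=docker --container-runtime=containerd
	sleep 180  # let the 3m certificates expire
	out/minikube-linux-arm64 start -p cert-exp-demo --memory=3072 --cert-expiration=8760h \
	  --driver=docker --container-runtime=containerd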

TestDockerEnvContainerd (43.37s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-767572 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-767572 --driver=docker  --container-runtime=containerd: (27.798609909s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-767572"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-767572": (1.147593208s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-o7fvLKIYSDxu/agent.321388" SSH_AGENT_PID="321389" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-o7fvLKIYSDxu/agent.321388" SSH_AGENT_PID="321389" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-o7fvLKIYSDxu/agent.321388" SSH_AGENT_PID="321389" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.474089342s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-o7fvLKIYSDxu/agent.321388" SSH_AGENT_PID="321389" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-767572" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-767572
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-767572: (2.064402051s)
--- PASS: TestDockerEnvContainerd (43.37s)
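
With the containerd runtime, docker-env tunnels the Docker API over SSH rather than pointing at a local socket; the flow above, done by hand (a sketch; the profile name dockerenv-demo is illustrative):

	out/minikube-linux-arm64 start -p dockerenv-demo --driver=docker --container-runtime=containerd
	# exports DOCKER_HOST=ssh://... and loads the node key into ssh-agent
	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-demo)"
	docker version                                    # now talks to the node over SSH
	DOCKER_BUILDKIT=0 docker build -t local/demo:latest testdata/docker-env
	docker image ls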

TestErrorSpam/setup (26.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-853064 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-853064 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-853064 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-853064 --driver=docker  --container-runtime=containerd: (26.174137865s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (26.17s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.77s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 pause
--- PASS: TestErrorSpam/pause (1.77s)

TestErrorSpam/unpause (1.93s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

TestErrorSpam/stop (1.64s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 stop: (1.428021512s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-853064 --log_dir /tmp/nospam-853064 stop
--- PASS: TestErrorSpam/stop (1.64s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22332-300670/.minikube/files/etc/test/nested/copy/302541/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-698656 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-698656 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (45.60999694s)
--- PASS: TestFunctional/serial/StartWithProxy (45.61s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.97s)

=== RUN   TestFunctional/serial/SoftStart
I1227 20:02:38.159670  302541 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-698656 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-698656 --alsologtostderr -v=8: (6.961472163s)
functional_test.go:678: soft start took 6.965662352s for "functional-698656" cluster.
I1227 20:02:45.121514  302541 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (6.97s)

TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-698656 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 cache add registry.k8s.io/pause:3.1: (1.314303893s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 cache add registry.k8s.io/pause:3.3: (1.142096259s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 cache add registry.k8s.io/pause:latest: (1.057704212s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-698656 /tmp/TestFunctionalserialCacheCmdcacheadd_local3971395882/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cache add minikube-local-cache-test:functional-698656
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cache delete minikube-local-cache-test:functional-698656
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-698656
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.786186ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cache reload
E1227 20:02:51.831314  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:51.837343  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:51.847623  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:51.867890  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:51.908234  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:51.988548  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1227 20:02:52.149385  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
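
The cache_reload sequence doubles as the manual recovery path when an image has been removed from the node: delete it with crictl, confirm it is gone, then push the host-side cache back with cache reload (a sketch, using this run's profile):

	out/minikube-linux-arm64 -p functional-698656 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-698656 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits 1: image gone
	out/minikube-linux-arm64 -p functional-698656 cache reload
	out/minikube-linux-arm64 -p functional-698656 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again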

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
E1227 20:02:52.470150  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 kubectl -- --context functional-698656 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-698656 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (47.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-698656 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1227 20:02:53.111291  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:54.392472  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:02:56.952707  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:03:02.073436  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:03:12.314521  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:03:32.795721  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-698656 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.800841789s)
functional_test.go:776: restart took 47.800938686s for "functional-698656" cluster.
I1227 20:03:40.575083  302541 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (47.81s)
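
--extra-config passes component flags straight through to the named component (format component.key=value), so the restart above is equivalent to (a sketch):

	out/minikube-linux-arm64 start -p functional-698656 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all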

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-698656 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.5s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 logs: (1.502355454s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

TestFunctional/serial/LogsFileCmd (1.53s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 logs --file /tmp/TestFunctionalserialLogsFileCmd276015779/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 logs --file /tmp/TestFunctionalserialLogsFileCmd276015779/001/logs.txt: (1.529514866s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-698656 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-698656
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-698656: exit status 115 (758.255544ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30454 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-698656 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.67s)
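
Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the service exists but no running pod backs it. A quick way to confirm that diagnosis by hand (a sketch):

	kubectl --context functional-698656 get svc invalid-svc
	kubectl --context functional-698656 get endpoints invalid-svc  # empty ENDPOINTS: nothing backing the service
	out/minikube-linux-arm64 service invalid-svc -p functional-698656  # exits 115 with SVC_UNREACHABLE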

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 config get cpus: exit status 14 (67.480054ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 config get cpus: exit status 14 (69.935766ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
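
The config subcommands behave like a small key/value store; exit status 14 is the "key not found" case probed twice above (a sketch):

	out/minikube-linux-arm64 -p functional-698656 config set cpus 2
	out/minikube-linux-arm64 -p functional-698656 config get cpus    # prints 2
	out/minikube-linux-arm64 -p functional-698656 config unset cpus
	out/minikube-linux-arm64 -p functional-698656 config get cpus    # exit status 14: key not found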

TestFunctional/parallel/DashboardCmd (9.35s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-698656 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-698656 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 336817: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.35s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-698656 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-698656 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (191.739977ms)

-- stdout --
	* [functional-698656] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1227 20:04:19.831980  336215 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:04:19.832252  336215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:04:19.832285  336215 out.go:374] Setting ErrFile to fd 2...
	I1227 20:04:19.832322  336215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:04:19.832603  336215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:04:19.833013  336215 out.go:368] Setting JSON to false
	I1227 20:04:19.835003  336215 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6411,"bootTime":1766859449,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 20:04:19.835111  336215 start.go:143] virtualization:  
	I1227 20:04:19.840369  336215 out.go:179] * [functional-698656] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:04:19.843449  336215 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:04:19.843565  336215 notify.go:221] Checking for updates...
	I1227 20:04:19.849274  336215 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:04:19.852277  336215 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:04:19.855054  336215 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 20:04:19.857793  336215 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:04:19.860696  336215 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:04:19.864031  336215 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:04:19.864737  336215 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:04:19.888061  336215 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:04:19.888171  336215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:04:19.950859  336215 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:04:19.940657559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:04:19.950968  336215 docker.go:319] overlay module found
	I1227 20:04:19.953967  336215 out.go:179] * Using the docker driver based on existing profile
	I1227 20:04:19.956787  336215 start.go:309] selected driver: docker
	I1227 20:04:19.956809  336215 start.go:928] validating driver "docker" against &{Name:functional-698656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-698656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:04:19.956928  336215 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:04:19.960552  336215 out.go:203] 
	W1227 20:04:19.963562  336215 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 20:04:19.966436  336215 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-698656 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.55s)
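
The two dry-run invocations above bracket minikube's pre-flight memory validation: with --memory 250MB the start aborts on RSRC_INSUFFICIENT_REQ_MEMORY before touching the existing node (the usable minimum is 1800MB), while the plain --dry-run against the same profile validates cleanly. A minimal sketch of the failing case, reusing the profile from this report:

	out/minikube-linux-arm64 start -p functional-698656 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	# expected: non-zero exit with "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY"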

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-698656 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-698656 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (222.83378ms)

-- stdout --
	* [functional-698656] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1227 20:04:19.622296  336168 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:04:19.622492  336168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:04:19.622527  336168 out.go:374] Setting ErrFile to fd 2...
	I1227 20:04:19.622548  336168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:04:19.623616  336168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:04:19.624085  336168 out.go:368] Setting JSON to false
	I1227 20:04:19.625118  336168 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6411,"bootTime":1766859449,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 20:04:19.625268  336168 start.go:143] virtualization:  
	I1227 20:04:19.628694  336168 out.go:179] * [functional-698656] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1227 20:04:19.632556  336168 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:04:19.632691  336168 notify.go:221] Checking for updates...
	I1227 20:04:19.638609  336168 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:04:19.641430  336168 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:04:19.644268  336168 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 20:04:19.647420  336168 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:04:19.650495  336168 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:04:19.655598  336168 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:04:19.657264  336168 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:04:19.699706  336168 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:04:19.699849  336168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:04:19.759453  336168 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 20:04:19.749325296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:04:19.759569  336168 docker.go:319] overlay module found
	I1227 20:04:19.762994  336168 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 20:04:19.765693  336168 start.go:309] selected driver: docker
	I1227 20:04:19.765713  336168 start.go:928] validating driver "docker" against &{Name:functional-698656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-698656 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 20:04:19.765831  336168 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:04:19.769295  336168 out.go:203] 
	W1227 20:04:19.772184  336168 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 20:04:19.775091  336168 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
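
The French output is the assertion here: minikube selects its message catalog from the locale environment, so the same memory-validation failure renders as "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY". A reproduction sketch (the exact locale value the harness exports is an assumption):

	LC_ALL=fr out/minikube-linux-arm64 start -p functional-698656 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	# expected: exit status 23 with the French message shown above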

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
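
The -f argument is a Go template rendered against minikube's status struct; only the {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} field references matter, and the surrounding text (including the misspelled "kublet" label, which is verbatim from the test invocation) is echoed as-is. A sketch:

	out/minikube-linux-arm64 -p functional-698656 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	# typically prints: host:Running,kubelet:Running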

TestFunctional/parallel/ServiceCmdConnect (8.89s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-698656 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-698656 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-qdq49" [d172b58e-e4d0-4258-b089-20c2cd353904] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-qdq49" [d172b58e-e4d0-4258-b089-20c2cd353904] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003426946s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30410
functional_test.go:1685: http://192.168.49.2:30410: success! body:
Request served by hello-node-connect-5d95464fd4-qdq49

HTTP/1.1 GET /

Host: 192.168.49.2:30410
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.89s)
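
This is the standard NodePort round trip: create a deployment, expose it, let minikube resolve the node IP plus the allocated port, then assert on the echoed request. Condensed from the commands above (the NodePort varies per run):

	kubectl --context functional-698656 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
	kubectl --context functional-698656 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-698656 service hello-node-connect --url
	# -> http://192.168.49.2:30410 in this run; a GET against it returns the request echo shown above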

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)
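
For scripting, the JSON form emits the same enabled/disabled map that the plain listing renders as a table:

	out/minikube-linux-arm64 -p functional-698656 addons list -o json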

TestFunctional/parallel/PersistentVolumeClaim (20.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [91086abd-1a36-4201-b381-fb8ee00f8e6f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00460769s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-698656 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-698656 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-698656 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-698656 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7eca8008-5a2c-4916-8b3a-f28283b0b9b9] Pending
helpers_test.go:353: "sp-pod" [7eca8008-5a2c-4916-8b3a-f28283b0b9b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [7eca8008-5a2c-4916-8b3a-f28283b0b9b9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005406493s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-698656 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-698656 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-698656 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [87e51da6-801c-43c0-b9ae-db055414be55] Pending
helpers_test.go:353: "sp-pod" [87e51da6-801c-43c0-b9ae-db055414be55] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003660973s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-698656 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.95s)
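
The persistence check is the substance of this test: a file created through the claim survives deletion and re-creation of the consuming pod. Condensed from the steps above (the testdata manifests themselves are not reproduced in this report):

	kubectl --context functional-698656 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-698656 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-698656 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-698656 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-698656 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-698656 exec sp-pod -- ls /tmp/mount
	# foo is still listed: the volume is bound to the PVC, not to the pod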

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh -n functional-698656 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cp functional-698656:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1716120609/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh -n functional-698656 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh -n functional-698656 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)
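
minikube cp treats a bare path as a host path and a "<profile>:" prefix as a path inside the node, creating missing parent directories on the target (the /tmp/does/not/exist case above). A sketch with a hypothetical local destination:

	out/minikube-linux-arm64 -p functional-698656 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-698656 cp functional-698656:/home/docker/cp-test.txt /tmp/cp-test-copy.txt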

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/302541/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo cat /etc/test/nested/copy/302541/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
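
File sync is driven by directory layout rather than flags: anything under $MINIKUBE_HOME/files/ is copied into the node at the same absolute path at start, which is how /etc/test/nested/copy/302541/hosts got there. A hedged sketch (the staged filename is an assumption):

	mkdir -p "$MINIKUBE_HOME/files/etc/test"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/example"
	# after the next start of the profile, the file appears at /etc/test/example inside the node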

TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/302541.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo cat /etc/ssl/certs/302541.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/302541.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo cat /usr/share/ca-certificates/302541.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3025412.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo cat /etc/ssl/certs/3025412.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/3025412.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo cat /usr/share/ca-certificates/3025412.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
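
Cert sync works the same way from $MINIKUBE_HOME/certs/: each PEM is copied to /etc/ssl/certs and /usr/share/ca-certificates inside the node, along with an OpenSSL subject-hash alias (51391683.0 and 3ec20f2e.0 above are presumably such hashes). The alias can be recomputed on the host:

	openssl x509 -noout -hash -in "$MINIKUBE_HOME/certs/302541.pem"
	# prints the 8-hex-digit hash; the node exposes the cert as /etc/ssl/certs/<hash>.0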

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-698656 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
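
The template ranges over the first node's label map and prints only the keys; mirroring the invocation above:

	kubectl --context functional-698656 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	# prints the label keys of the first node, e.g. kubernetes.io/arch kubernetes.io/hostname ...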

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 ssh "sudo systemctl is-active docker": exit status 1 (365.328845ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 ssh "sudo systemctl is-active crio": exit status 1 (381.885866ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
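
The exit codes are the expected ones: systemctl is-active exits 0 only for an active unit and 3 for an inactive one, and that remote status 3 surfaces through minikube ssh as the "Process exited with status 3" stderr while the wrapper itself reports exit 1. The configured runtime should be the lone exception:

	out/minikube-linux-arm64 -p functional-698656 ssh "sudo systemctl is-active containerd"
	# active (exit 0); docker and crio print "inactive" and exit 3, as asserted above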

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-698656 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-698656 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-698656 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 333504: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-698656 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-698656 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-698656 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [887e3239-8c0d-4e08-b673-b28524b27e7d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [887e3239-8c0d-4e08-b673-b28524b27e7d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004040447s
I1227 20:03:59.663338  302541 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-698656 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.17.125 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-698656 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
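
The serial tunnel flow above is the LoadBalancer story on the docker driver: minikube tunnel runs as a background daemon and routes the service's ingress IP (here 10.98.17.125, drawn from the service CIDR) to the host; the "signal: terminated" note from DeleteTunnel is the expected result of killing that daemon. Condensed:

	out/minikube-linux-arm64 -p functional-698656 tunnel --alsologtostderr &
	kubectl --context functional-698656 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	# -> 10.98.17.125; curl http://10.98.17.125 reaches the nginx-svc pod while the tunnel runs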

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-698656 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-698656 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-t9td9" [61e93a01-c537-4177-a7d9-6ef5ec3db511] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-t9td9" [61e93a01-c537-4177-a7d9-6ef5ec3db511] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005359431s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "384.071328ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "58.897303ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "393.329559ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "53.378287ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
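
The timing gap is the point of --light: roughly 393ms for the full listing versus 53ms with the flag, because the light variant skips validating each cluster's status and reports only what is in the profile config:

	out/minikube-linux-arm64 profile list -o json          # probes cluster status (~0.4s here)
	out/minikube-linux-arm64 profile list -o json --light  # config only (~0.05s here)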

TestFunctional/parallel/MountCmd/any-port (8.35s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdany-port4031617164/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766865853094974327" to /tmp/TestFunctionalparallelMountCmdany-port4031617164/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766865853094974327" to /tmp/TestFunctionalparallelMountCmdany-port4031617164/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766865853094974327" to /tmp/TestFunctionalparallelMountCmdany-port4031617164/001/test-1766865853094974327
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.531772ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 20:04:13.440690  302541 retry.go:84] will retry after 500ms: exit status 1
E1227 20:04:13.755958  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 20:04 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 20:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 20:04 test-1766865853094974327
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh cat /mount-9p/test-1766865853094974327
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-698656 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [df3c7dc1-b3bf-41fe-9117-7d28195095db] Pending
helpers_test.go:353: "busybox-mount" [df3c7dc1-b3bf-41fe-9117-7d28195095db] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [df3c7dc1-b3bf-41fe-9117-7d28195095db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [df3c7dc1-b3bf-41fe-9117-7d28195095db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007985013s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-698656 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdany-port4031617164/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.35s)
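
The mount workflow is the 9p path: minikube mount runs as a daemon serving the host directory, the guest side is polled with findmnt until the mount appears (the first probe fails and is retried after 500ms above), and a pod can then read and write the shared files. A sketch with a hypothetical host directory:

	out/minikube-linux-arm64 mount -p functional-698656 /tmp/hostdir:/mount-9p &
	out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-698656 ssh "ls -la /mount-9p"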

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 service list -o json
functional_test.go:1509: Took "531.58694ms" to run "out/minikube-linux-arm64 -p functional-698656 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:32684
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:32684
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.14s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdspecific-port2919862426/001:/mount-9p --alsologtostderr -v=1 --port 44291]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (583.005212ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 20:04:22.030525  302541 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdspecific-port2919862426/001:/mount-9p --alsologtostderr -v=1 --port 44291] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 ssh "sudo umount -f /mount-9p": exit status 1 (322.08022ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-698656 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdspecific-port2919862426/001:/mount-9p --alsologtostderr -v=1 --port 44291] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.14s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.18s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2400529111/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2400529111/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2400529111/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T" /mount1: exit status 1 (856.649027ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-698656 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2400529111/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2400529111/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-698656 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2400529111/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.18s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 version -o=json --components: (1.359586371s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-698656 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-698656
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-698656 image ls --format short --alsologtostderr:
I1227 20:04:34.247521  339237 out.go:360] Setting OutFile to fd 1 ...
I1227 20:04:34.247643  339237 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:34.247656  339237 out.go:374] Setting ErrFile to fd 2...
I1227 20:04:34.247675  339237 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:34.247978  339237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
I1227 20:04:34.248676  339237 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:34.248853  339237 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:34.249447  339237 cli_runner.go:164] Run: docker container inspect functional-698656 --format={{.State.Status}}
I1227 20:04:34.281855  339237 ssh_runner.go:195] Run: systemctl --version
I1227 20:04:34.281915  339237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-698656
I1227 20:04:34.324874  339237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/functional-698656/id_rsa Username:docker}
I1227 20:04:34.430711  339237 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
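
The stderr trace shows how image ls is implemented on a containerd cluster: minikube inspects the node container for its SSH port, opens an SSH session, and shells out to crictl, formatting the returned JSON client-side. The underlying query can be run directly:

	out/minikube-linux-arm64 -p functional-698656 ssh "sudo crictl --timeout=10s images --output json"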

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-698656 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ sha256:ddc842 │ 15.4MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ sha256:962dbb │ 23MB   │
│ registry.k8s.io/pause                             │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                             │ latest                                │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-698656                     │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ docker.io/library/minikube-local-cache-test       │ functional-698656                     │ sha256:630402 │ 990B   │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ sha256:c3fcf2 │ 24.7MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ sha256:88898f │ 20.7MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ sha256:de369f │ 22.4MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-698656 image ls --format table --alsologtostderr:
I1227 20:04:34.582842  339311 out.go:360] Setting OutFile to fd 1 ...
I1227 20:04:34.584322  339311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:34.584344  339311 out.go:374] Setting ErrFile to fd 2...
I1227 20:04:34.584356  339311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:34.584712  339311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
I1227 20:04:34.585509  339311 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:34.585693  339311 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:34.586397  339311 cli_runner.go:164] Run: docker container inspect functional-698656 --format={{.State.Status}}
I1227 20:04:34.621976  339311 ssh_runner.go:195] Run: systemctl --version
I1227 20:04:34.622036  339311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-698656
I1227 20:04:34.649927  339311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/functional-698656/id_rsa Username:docker}
I1227 20:04:34.759665  339311 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-698656 image ls --format json --alsologtostderr:
[{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"},{"id":"sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aaf
a471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"20672243"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{
"id":"sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22987510"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c472
4a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656"],"size":"2173567"},{"id":"sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"22432091"},{"id":"sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f"],"repoT
ags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"15405198"},{"id":"sha256:630402e26a10276b0011a1183a31f1331473c3d751e7cb4531e59b72d0069edd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-698656"],"size":"990"},{"id":"sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"24692295"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-698656 image ls --format json --alsologtostderr:
I1227 20:04:34.537196  339305 out.go:360] Setting OutFile to fd 1 ...
I1227 20:04:34.537409  339305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:34.537436  339305 out.go:374] Setting ErrFile to fd 2...
I1227 20:04:34.537455  339305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:34.537729  339305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
I1227 20:04:34.538539  339305 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:34.538699  339305 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:34.539302  339305 cli_runner.go:164] Run: docker container inspect functional-698656 --format={{.State.Status}}
I1227 20:04:34.560735  339305 ssh_runner.go:195] Run: systemctl --version
I1227 20:04:34.560792  339305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-698656
I1227 20:04:34.590103  339305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/functional-698656/id_rsa Username:docker}
I1227 20:04:34.698801  339305 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
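The JSON above is a single flat array of image records. Here is a minimal Go sketch for decoding it, with field names (id, repoDigests, repoTags, size) taken from that output; note size is a quoted decimal string, and untagged entries (the dashboard and metrics-scraper images above) carry an empty repoTags. This is an illustration, not the test's code.

// decode_images.go: unmarshal `minikube image ls --format json` output as
// shown above. Field names come straight from that output; the rest is a sketch.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors one record in the JSON array above. Size is a quoted
// decimal string in the output, so it stays a string here.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		// Untagged images have an empty repoTags slice.
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-70s %s bytes\n", tag, img.Size)
	}
}

Piping out/minikube-linux-arm64 -p functional-698656 image ls --format json into this would print one tag-and-size line per image.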
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-698656 image ls --format yaml --alsologtostderr:
- id: sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "20672243"
- id: sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "22432091"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656
size: "2173567"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "24692295"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22987510"
- id: sha256:630402e26a10276b0011a1183a31f1331473c3d751e7cb4531e59b72d0069edd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-698656
size: "990"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "15405198"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-698656 image ls --format yaml --alsologtostderr:
I1227 20:04:34.243770  339232 out.go:360] Setting OutFile to fd 1 ...
I1227 20:04:34.244047  339232 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:34.244086  339232 out.go:374] Setting ErrFile to fd 2...
I1227 20:04:34.244110  339232 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:34.244519  339232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
I1227 20:04:34.245923  339232 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:34.246131  339232 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:34.246738  339232 cli_runner.go:164] Run: docker container inspect functional-698656 --format={{.State.Status}}
I1227 20:04:34.278003  339232 ssh_runner.go:195] Run: systemctl --version
I1227 20:04:34.278060  339232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-698656
I1227 20:04:34.312012  339232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/functional-698656/id_rsa Username:docker}
I1227 20:04:34.414163  339232 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-698656 ssh pgrep buildkitd: exit status 1 (314.106542ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image build -t localhost/my-image:functional-698656 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 image build -t localhost/my-image:functional-698656 testdata/build --alsologtostderr: (3.33756257s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-698656 image build -t localhost/my-image:functional-698656 testdata/build --alsologtostderr:
I1227 20:04:35.105561  339441 out.go:360] Setting OutFile to fd 1 ...
I1227 20:04:35.106231  339441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:35.106251  339441 out.go:374] Setting ErrFile to fd 2...
I1227 20:04:35.106258  339441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 20:04:35.106553  339441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
I1227 20:04:35.107330  339441 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:35.108058  339441 config.go:182] Loaded profile config "functional-698656": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 20:04:35.108698  339441 cli_runner.go:164] Run: docker container inspect functional-698656 --format={{.State.Status}}
I1227 20:04:35.129266  339441 ssh_runner.go:195] Run: systemctl --version
I1227 20:04:35.129339  339441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-698656
I1227 20:04:35.148428  339441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/functional-698656/id_rsa Username:docker}
I1227 20:04:35.246131  339441 build_images.go:162] Building image from path: /tmp/build.62402818.tar
I1227 20:04:35.246213  339441 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 20:04:35.254696  339441 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.62402818.tar
I1227 20:04:35.258677  339441 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.62402818.tar: stat -c "%s %y" /var/lib/minikube/build/build.62402818.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.62402818.tar': No such file or directory
I1227 20:04:35.258708  339441 ssh_runner.go:362] scp /tmp/build.62402818.tar --> /var/lib/minikube/build/build.62402818.tar (3072 bytes)
I1227 20:04:35.278289  339441 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.62402818
I1227 20:04:35.296117  339441 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.62402818 -xf /var/lib/minikube/build/build.62402818.tar
I1227 20:04:35.306031  339441 containerd.go:402] Building image: /var/lib/minikube/build/build.62402818
I1227 20:04:35.306120  339441 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.62402818 --local dockerfile=/var/lib/minikube/build/build.62402818 --output type=image,name=localhost/my-image:functional-698656
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3f68f0aba7f28abaca3c50d85538303a1a02c18a683e7d4806b8582805b0d397 0.0s done
#8 exporting config sha256:f03e3485c55f82272ca4590cffc57c98d77127cfa781c2a732d981d57dea333f 0.0s done
#8 naming to localhost/my-image:functional-698656 done
#8 DONE 0.2s
I1227 20:04:38.365609  339441 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.62402818 --local dockerfile=/var/lib/minikube/build/build.62402818 --output type=image,name=localhost/my-image:functional-698656: (3.059461534s)
I1227 20:04:38.365690  339441 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.62402818
I1227 20:04:38.374346  339441 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.62402818.tar
I1227 20:04:38.382467  339441 build_images.go:218] Built localhost/my-image:functional-698656 from /tmp/build.62402818.tar
I1227 20:04:38.382499  339441 build_images.go:134] succeeded building to: functional-698656
I1227 20:04:38.382505  339441 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls
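The trace above shows the build path end to end: the context directory is tarred to /tmp/build.62402818.tar, copied into /var/lib/minikube/build over SSH, untarred, and built with buildctl's dockerfile.v0 frontend. The visible steps (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) correspond to the 97-byte Dockerfile transferred in step #1. A minimal Go sketch that drives the same CLI entry point the test invokes; profile, tag, and context dir are this run's values, and the sketch is illustrative rather than the test's source.

// image_build.go: invoke the same `minikube image build` the test runs above.
// A sketch, not functional_test.go itself; values match this run.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"out/minikube-linux-arm64", "-p", "functional-698656",
		"image", "build",
		"-t", "localhost/my-image:functional-698656",
		"testdata/build", "--alsologtostderr",
	)
	// Stream the buildkit progress (the #1..#8 steps above) as it happens.
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("image build failed: %v", err)
	}
}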
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)

TestFunctional/parallel/ImageCommands/Setup (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656 --alsologtostderr: (1.053225519s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656 --alsologtostderr
2025/12/27 20:04:29 [DEBUG] GET http://127.0.0.1:38713/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-698656 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656 --alsologtostderr: (1.048067002s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-698656 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656
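This subtest is a round trip: remove the tag from the host daemon, ask minikube to save the node's copy back into the daemon, then confirm with docker image inspect. A sketch chaining the same three commands; the image ref and profile are this run's, and run is an illustrative helper, not a test utility.

// save_daemon_roundtrip.go: the three commands above, chained. A sketch.
package main

import (
	"log"
	"os/exec"
)

// run executes one command and aborts with its combined output on failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	ref := "ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656"
	run("docker", "rmi", ref) // drop the host daemon's copy
	run("out/minikube-linux-arm64", "-p", "functional-698656",
		"image", "save", "--daemon", ref) // restore it from the node
	run("docker", "image", "inspect", ref) // verify it is back
}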
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-698656
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-698656
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-698656
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (172.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1227 20:05:35.677157  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m52.056093403s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (172.96s)

TestMultiControlPlane/serial/DeployApp (7.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 kubectl -- rollout status deployment/busybox: (4.181009104s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-2b9cm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-czscv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-j4hwg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-2b9cm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-czscv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-j4hwg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-2b9cm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-czscv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-j4hwg -- nslookup kubernetes.default.svc.cluster.local
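DeployApp resolves kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from each of the three busybox replicas. A sketch of the same matrix via kubectl exec follows; the pod names below are from this run and would differ elsewhere, and the code is illustrative rather than ha_test.go itself.

// dns_check.go: repeat the nslookup matrix above against every pod.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-769dd8b7dd-2b9cm", "busybox-769dd8b7dd-czscv", "busybox-769dd8b7dd-j4hwg"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, name := range names {
		for _, pod := range pods {
			// Same kubectl --context form used by the NodeLabels step below.
			out, err := exec.Command("kubectl", "--context", "ha-575301",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			fmt.Printf("%s / %s: err=%v\n%s", pod, name, err, out)
		}
	}
}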
--- PASS: TestMultiControlPlane/serial/DeployApp (7.04s)

TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-2b9cm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-2b9cm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-czscv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-czscv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-j4hwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 kubectl -- exec busybox-769dd8b7dd-j4hwg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)

TestMultiControlPlane/serial/AddWorkerNode (30.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 node add --alsologtostderr -v 5
E1227 20:07:51.828981  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 node add --alsologtostderr -v 5: (29.042265708s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5: (1.06434275s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.11s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-575301 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.060037212s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

TestMultiControlPlane/serial/CopyFile (19.87s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 status --output json --alsologtostderr -v 5: (1.073074748s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp testdata/cp-test.txt ha-575301:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3073518142/001/cp-test_ha-575301.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301:/home/docker/cp-test.txt ha-575301-m02:/home/docker/cp-test_ha-575301_ha-575301-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m02 "sudo cat /home/docker/cp-test_ha-575301_ha-575301-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301:/home/docker/cp-test.txt ha-575301-m03:/home/docker/cp-test_ha-575301_ha-575301-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m03 "sudo cat /home/docker/cp-test_ha-575301_ha-575301-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301:/home/docker/cp-test.txt ha-575301-m04:/home/docker/cp-test_ha-575301_ha-575301-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301 "sudo cat /home/docker/cp-test.txt"
E1227 20:08:19.519114  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m04 "sudo cat /home/docker/cp-test_ha-575301_ha-575301-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp testdata/cp-test.txt ha-575301-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3073518142/001/cp-test_ha-575301-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m02:/home/docker/cp-test.txt ha-575301:/home/docker/cp-test_ha-575301-m02_ha-575301.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301 "sudo cat /home/docker/cp-test_ha-575301-m02_ha-575301.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m02:/home/docker/cp-test.txt ha-575301-m03:/home/docker/cp-test_ha-575301-m02_ha-575301-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m03 "sudo cat /home/docker/cp-test_ha-575301-m02_ha-575301-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m02:/home/docker/cp-test.txt ha-575301-m04:/home/docker/cp-test_ha-575301-m02_ha-575301-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m04 "sudo cat /home/docker/cp-test_ha-575301-m02_ha-575301-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp testdata/cp-test.txt ha-575301-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3073518142/001/cp-test_ha-575301-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m03:/home/docker/cp-test.txt ha-575301:/home/docker/cp-test_ha-575301-m03_ha-575301.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301 "sudo cat /home/docker/cp-test_ha-575301-m03_ha-575301.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m03:/home/docker/cp-test.txt ha-575301-m02:/home/docker/cp-test_ha-575301-m03_ha-575301-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m02 "sudo cat /home/docker/cp-test_ha-575301-m03_ha-575301-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m03:/home/docker/cp-test.txt ha-575301-m04:/home/docker/cp-test_ha-575301-m03_ha-575301-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m04 "sudo cat /home/docker/cp-test_ha-575301-m03_ha-575301-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp testdata/cp-test.txt ha-575301-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3073518142/001/cp-test_ha-575301-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m04:/home/docker/cp-test.txt ha-575301:/home/docker/cp-test_ha-575301-m04_ha-575301.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301 "sudo cat /home/docker/cp-test_ha-575301-m04_ha-575301.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m04:/home/docker/cp-test.txt ha-575301-m02:/home/docker/cp-test_ha-575301-m04_ha-575301-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m02 "sudo cat /home/docker/cp-test_ha-575301-m04_ha-575301-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 cp ha-575301-m04:/home/docker/cp-test.txt ha-575301-m03:/home/docker/cp-test_ha-575301-m04_ha-575301-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 ssh -n ha-575301-m03 "sudo cat /home/docker/cp-test_ha-575301-m04_ha-575301-m03.txt"
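CopyFile pushes testdata/cp-test.txt to every node (and node to node) with minikube cp, then reads each copy back via minikube ssh -n <node> "sudo cat ..." and compares. A minimal Go sketch of one push-and-verify cycle; the node name and paths are from this run, and the sketch is not the helpers_test.go implementation.

// cp_verify.go: one `minikube cp` + `ssh sudo cat` cycle from the sequence above.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile, node, remote = "ha-575301", "ha-575301-m02", "/home/docker/cp-test.txt"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Push the file to the target node, as helpers_test.go:574 does above.
	if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", "testdata/cp-test.txt", node+":"+remote).Run(); err != nil {
		log.Fatalf("cp: %v", err)
	}
	// Read it back over SSH, as helpers_test.go:552 does above.
	got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		log.Fatalf("ssh cat: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("content mismatch after cp")
	}
	log.Println("cp round-trip OK")
}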
--- PASS: TestMultiControlPlane/serial/CopyFile (19.87s)

TestMultiControlPlane/serial/StopSecondaryNode (12.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 node stop m02 --alsologtostderr -v 5: (12.162350903s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5: exit status 7 (790.629477ms)

-- stdout --
	ha-575301
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-575301-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-575301-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-575301-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1227 20:08:46.372626  355790 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:08:46.372852  355790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:08:46.372885  355790 out.go:374] Setting ErrFile to fd 2...
	I1227 20:08:46.372922  355790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:08:46.373234  355790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:08:46.373458  355790 out.go:368] Setting JSON to false
	I1227 20:08:46.373523  355790 mustload.go:66] Loading cluster: ha-575301
	I1227 20:08:46.373606  355790 notify.go:221] Checking for updates...
	I1227 20:08:46.374003  355790 config.go:182] Loaded profile config "ha-575301": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:08:46.374038  355790 status.go:174] checking status of ha-575301 ...
	I1227 20:08:46.374586  355790 cli_runner.go:164] Run: docker container inspect ha-575301 --format={{.State.Status}}
	I1227 20:08:46.400157  355790 status.go:371] ha-575301 host status = "Running" (err=<nil>)
	I1227 20:08:46.400181  355790 host.go:66] Checking if "ha-575301" exists ...
	I1227 20:08:46.400486  355790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-575301
	I1227 20:08:46.428295  355790 host.go:66] Checking if "ha-575301" exists ...
	I1227 20:08:46.428605  355790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:08:46.428650  355790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-575301
	I1227 20:08:46.448603  355790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/ha-575301/id_rsa Username:docker}
	I1227 20:08:46.549765  355790 ssh_runner.go:195] Run: systemctl --version
	I1227 20:08:46.556360  355790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:08:46.569015  355790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:08:46.635654  355790 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-27 20:08:46.62603132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:08:46.636169  355790 kubeconfig.go:125] found "ha-575301" server: "https://192.168.49.254:8443"
	I1227 20:08:46.636213  355790 api_server.go:166] Checking apiserver status ...
	I1227 20:08:46.636261  355790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:08:46.649297  355790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1338/cgroup
	I1227 20:08:46.658298  355790 api_server.go:192] apiserver freezer: "9:freezer:/docker/50e2befd182f5420eb6a527839d04c26655a48508872fe15721daad21885e70f/kubepods/burstable/pod91606a49b950a1ffb61cbd39ada3d0e5/8c2b94c17e784ab53dec5ef5b9baf8977cd668567b89b1603081e9ecfa64c444"
	I1227 20:08:46.658382  355790 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/50e2befd182f5420eb6a527839d04c26655a48508872fe15721daad21885e70f/kubepods/burstable/pod91606a49b950a1ffb61cbd39ada3d0e5/8c2b94c17e784ab53dec5ef5b9baf8977cd668567b89b1603081e9ecfa64c444/freezer.state
	I1227 20:08:46.667577  355790 api_server.go:214] freezer state: "THAWED"
	I1227 20:08:46.667609  355790 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 20:08:46.676403  355790 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 20:08:46.676432  355790 status.go:463] ha-575301 apiserver status = Running (err=<nil>)
	I1227 20:08:46.676443  355790 status.go:176] ha-575301 status: &{Name:ha-575301 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:08:46.676460  355790 status.go:174] checking status of ha-575301-m02 ...
	I1227 20:08:46.676763  355790 cli_runner.go:164] Run: docker container inspect ha-575301-m02 --format={{.State.Status}}
	I1227 20:08:46.696855  355790 status.go:371] ha-575301-m02 host status = "Stopped" (err=<nil>)
	I1227 20:08:46.696879  355790 status.go:384] host is not running, skipping remaining checks
	I1227 20:08:46.696886  355790 status.go:176] ha-575301-m02 status: &{Name:ha-575301-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:08:46.696916  355790 status.go:174] checking status of ha-575301-m03 ...
	I1227 20:08:46.697230  355790 cli_runner.go:164] Run: docker container inspect ha-575301-m03 --format={{.State.Status}}
	I1227 20:08:46.716235  355790 status.go:371] ha-575301-m03 host status = "Running" (err=<nil>)
	I1227 20:08:46.716267  355790 host.go:66] Checking if "ha-575301-m03" exists ...
	I1227 20:08:46.716586  355790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-575301-m03
	I1227 20:08:46.743432  355790 host.go:66] Checking if "ha-575301-m03" exists ...
	I1227 20:08:46.743748  355790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:08:46.743787  355790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-575301-m03
	I1227 20:08:46.764534  355790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/ha-575301-m03/id_rsa Username:docker}
	I1227 20:08:46.865244  355790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:08:46.881272  355790 kubeconfig.go:125] found "ha-575301" server: "https://192.168.49.254:8443"
	I1227 20:08:46.881304  355790 api_server.go:166] Checking apiserver status ...
	I1227 20:08:46.881348  355790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:08:46.899790  355790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1418/cgroup
	I1227 20:08:46.909763  355790 api_server.go:192] apiserver freezer: "9:freezer:/docker/1efe4dfb9a603326d5670ee497e6531575b247b9abdd355dcc0bad6b6823d879/kubepods/burstable/pod5d563669327c1dd937f6cbe97d80a650/1db799c2d801667fe7ca7687f5b44fca02192da2c4a07ab878a6f4a316b6ef0c"
	I1227 20:08:46.909845  355790 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1efe4dfb9a603326d5670ee497e6531575b247b9abdd355dcc0bad6b6823d879/kubepods/burstable/pod5d563669327c1dd937f6cbe97d80a650/1db799c2d801667fe7ca7687f5b44fca02192da2c4a07ab878a6f4a316b6ef0c/freezer.state
	I1227 20:08:46.918632  355790 api_server.go:214] freezer state: "THAWED"
	I1227 20:08:46.918664  355790 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 20:08:46.926865  355790 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 20:08:46.926906  355790 status.go:463] ha-575301-m03 apiserver status = Running (err=<nil>)
	I1227 20:08:46.926915  355790 status.go:176] ha-575301-m03 status: &{Name:ha-575301-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:08:46.926958  355790 status.go:174] checking status of ha-575301-m04 ...
	I1227 20:08:46.927335  355790 cli_runner.go:164] Run: docker container inspect ha-575301-m04 --format={{.State.Status}}
	I1227 20:08:46.946335  355790 status.go:371] ha-575301-m04 host status = "Running" (err=<nil>)
	I1227 20:08:46.946362  355790 host.go:66] Checking if "ha-575301-m04" exists ...
	I1227 20:08:46.946669  355790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-575301-m04
	I1227 20:08:46.968260  355790 host.go:66] Checking if "ha-575301-m04" exists ...
	I1227 20:08:46.968718  355790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:08:46.968779  355790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-575301-m04
	I1227 20:08:46.989191  355790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/ha-575301-m04/id_rsa Username:docker}
	I1227 20:08:47.088738  355790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:08:47.103256  355790 status.go:176] ha-575301-m04 status: &{Name:ha-575301-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
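The status sequence in the stderr log above can be reproduced by hand. A rough sketch, assuming the ha-575301 profile from this run is still up; <cgroup-path> is a placeholder that must be taken from the pgrep/egrep output, since it varies per pod:

	# locate the kube-apiserver process on the node
	out/minikube-linux-arm64 -p ha-575301 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# read the container's freezer state (a healthy apiserver reports "THAWED")
	out/minikube-linux-arm64 -p ha-575301 ssh -- sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state
	# query the load-balanced apiserver endpoint (a healthy cluster returns "ok")
	curl -k https://192.168.49.254:8443/healthz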
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.95s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (12.88s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 node start m02 --alsologtostderr -v 5
E1227 20:08:50.190066  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:50.195472  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:50.205740  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:50.225996  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:50.266326  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:50.346668  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:50.506923  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:50.827203  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:51.468086  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:52.749059  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:08:55.310300  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 node start m02 --alsologtostderr -v 5: (11.226560079s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5
E1227 20:09:00.431396  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5: (1.499788084s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (12.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.109222838s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.11s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.21s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 stop --alsologtostderr -v 5
E1227 20:09:10.672591  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:09:31.152869  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 stop --alsologtostderr -v 5: (37.612707955s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 start --wait true --alsologtostderr -v 5
E1227 20:10:12.113956  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 start --wait true --alsologtostderr -v 5: (1m9.431262237s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.21s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.59s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 node delete m03 --alsologtostderr -v 5: (9.616176849s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
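The go-template in the step above flattens every node's Ready condition into a bare "True"/"False" per line, so the test can assert overall cluster health with a plain string comparison. The same check can be run directly against any reachable cluster; a sketch of the unwrapped shell form:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'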
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.59s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.07s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.067648245s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.07s)

TestMultiControlPlane/serial/StopCluster (36.43s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 stop --alsologtostderr -v 5
E1227 20:11:34.034275  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 stop --alsologtostderr -v 5: (36.320376695s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5: exit status 7 (113.02693ms)

-- stdout --
	ha-575301
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-575301-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-575301-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr **
	I1227 20:11:37.124397  370510 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:11:37.124543  370510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:11:37.124556  370510 out.go:374] Setting ErrFile to fd 2...
	I1227 20:11:37.124563  370510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:11:37.125000  370510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:11:37.125284  370510 out.go:368] Setting JSON to false
	I1227 20:11:37.125329  370510 mustload.go:66] Loading cluster: ha-575301
	I1227 20:11:37.126088  370510 notify.go:221] Checking for updates...
	I1227 20:11:37.126407  370510 config.go:182] Loaded profile config "ha-575301": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:11:37.126456  370510 status.go:174] checking status of ha-575301 ...
	I1227 20:11:37.127025  370510 cli_runner.go:164] Run: docker container inspect ha-575301 --format={{.State.Status}}
	I1227 20:11:37.148096  370510 status.go:371] ha-575301 host status = "Stopped" (err=<nil>)
	I1227 20:11:37.148118  370510 status.go:384] host is not running, skipping remaining checks
	I1227 20:11:37.148126  370510 status.go:176] ha-575301 status: &{Name:ha-575301 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:11:37.148159  370510 status.go:174] checking status of ha-575301-m02 ...
	I1227 20:11:37.148472  370510 cli_runner.go:164] Run: docker container inspect ha-575301-m02 --format={{.State.Status}}
	I1227 20:11:37.168817  370510 status.go:371] ha-575301-m02 host status = "Stopped" (err=<nil>)
	I1227 20:11:37.168838  370510 status.go:384] host is not running, skipping remaining checks
	I1227 20:11:37.168844  370510 status.go:176] ha-575301-m02 status: &{Name:ha-575301-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:11:37.168865  370510 status.go:174] checking status of ha-575301-m04 ...
	I1227 20:11:37.169166  370510 cli_runner.go:164] Run: docker container inspect ha-575301-m04 --format={{.State.Status}}
	I1227 20:11:37.186403  370510 status.go:371] ha-575301-m04 host status = "Stopped" (err=<nil>)
	I1227 20:11:37.186424  370510 status.go:384] host is not running, skipping remaining checks
	I1227 20:11:37.186431  370510 status.go:176] ha-575301-m04 status: &{Name:ha-575301-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
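Note the exit status 7 above: minikube status encodes node state in its exit code, so a fully stopped cluster is detectable in a script without parsing stdout. A minimal sketch, assuming the same profile name:

	out/minikube-linux-arm64 -p ha-575301 status >/dev/null 2>&1
	rc=$?
	# 0 means everything is running; the non-zero code here (7) corresponds to the
	# Stopped host/kubelet/apiserver rows printed in the stdout block above
	[ "$rc" -ne 0 ] && echo "cluster ha-575301 is not fully running (exit $rc)"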
--- PASS: TestMultiControlPlane/serial/StopCluster (36.43s)

TestMultiControlPlane/serial/RestartCluster (59.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (58.273656298s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.28s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (50.17s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 node add --control-plane --alsologtostderr -v 5
E1227 20:12:51.828561  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 node add --control-plane --alsologtostderr -v 5: (49.048655654s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-575301 status --alsologtostderr -v 5: (1.122313788s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (50.17s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.127927653s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

TestJSONOutput/start/Command (46.6s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-470295 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1227 20:13:50.189926  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:14:17.879816  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-470295 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (46.591054203s)
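With --output=json, each progress step is emitted as a structured event carrying currentstep/totalsteps fields (visible in the TestErrorJSONOutput stdout further down). That structure is what the Distinct/IncreasingCurrentSteps subtests below validate: step numbers must be unique and monotonically increasing across the run.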
--- PASS: TestJSONOutput/start/Command (46.60s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-470295 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-470295 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.05s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-470295 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-470295 --output=json --user=testUser: (6.052509862s)
--- PASS: TestJSONOutput/stop/Command (6.05s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-644557 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-644557 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.084223ms)

-- stdout --
	{"specversion":"1.0","id":"e5379e82-729f-485c-a555-90588e649edd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-644557] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a5b5a00-8410-4824-b7a6-04fe7b7fb10a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"249bfe9e-ef75-457e-95fb-118189035bbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"390aa322-d218-4a7e-992e-1761a8432e6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig"}}
	{"specversion":"1.0","id":"57368025-cb0f-4d44-8f01-3d67abf650b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube"}}
	{"specversion":"1.0","id":"876c7a3b-e8b6-4afd-85e1-d5337ff463ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8e820002-9c4f-4780-a1eb-80b9b856f13c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a79f7a8-0926-4728-b668-9c22ce441e13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-644557" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-644557
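Each stdout line above is a CloudEvents envelope, so the terminal error can be extracted with a line-oriented JSON filter. A small sketch, assuming jq is available on the host (re-running this leaves a profile stub that needs the same delete cleanup the test performs):

	out/minikube-linux-arm64 start -p json-output-error-644557 --output=json --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64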
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (34.72s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-122823 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-122823 --network=: (32.528240022s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-122823" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-122823
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-122823: (2.165492421s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.72s)

TestKicCustomNetwork/use_default_bridge_network (30.26s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-141816 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-141816 --network=bridge: (28.127647386s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-141816" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-141816
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-141816: (2.099388994s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.26s)

TestKicExistingNetwork (30.9s)

=== RUN   TestKicExistingNetwork
I1227 20:15:41.120772  302541 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 20:15:41.137232  302541 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 20:15:41.137327  302541 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 20:15:41.137344  302541 cli_runner.go:164] Run: docker network inspect existing-network
W1227 20:15:41.152823  302541 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 20:15:41.152858  302541 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1227 20:15:41.152871  302541 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1227 20:15:41.152975  302541 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 20:15:41.169986  302541 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-39a3264d8f81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:08:2a:c8:87:59} reservation:<nil>}
I1227 20:15:41.170293  302541 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001605d40}
I1227 20:15:41.170963  302541 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 20:15:41.171036  302541 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 20:15:41.240499  302541 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-521212 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-521212 --network=existing-network: (28.56866312s)
helpers_test.go:176: Cleaning up "existing-network-521212" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-521212
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-521212: (2.171850434s)
I1227 20:16:11.997891  302541 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
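The network the test adopts is created with the same bridge options and minikube labels that minikube itself applies, which is what lets --network=existing-network reuse it instead of provisioning a new one. The equivalent manual setup, assuming 192.168.58.0/24 is free on the host, mirrors the cli_runner command logged above:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network
	out/minikube-linux-arm64 start -p existing-network-521212 --network=existing-network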
--- PASS: TestKicExistingNetwork (30.90s)

TestKicCustomSubnet (29.39s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-834831 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-834831 --subnet=192.168.60.0/24: (27.207802223s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-834831 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-834831" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-834831
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-834831: (2.156848062s)
--- PASS: TestKicCustomSubnet (29.39s)

TestKicStaticIP (30.74s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-282716 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-282716 --static-ip=192.168.200.200: (28.376231055s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-282716 ip
helpers_test.go:176: Cleaning up "static-ip-282716" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-282716
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-282716: (2.194579454s)
--- PASS: TestKicStaticIP (30.74s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (62.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-974698 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-974698 --driver=docker  --container-runtime=containerd: (26.905828042s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-977176 --driver=docker  --container-runtime=containerd
E1227 20:17:51.829535  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-977176 --driver=docker  --container-runtime=containerd: (30.101455385s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-974698
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-977176
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-977176" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-977176
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-977176: (2.104628151s)
helpers_test.go:176: Cleaning up "first-974698" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-974698
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-974698: (2.288864155s)
--- PASS: TestMinikubeProfile (62.90s)

TestMountStart/serial/StartWithMountFirst (8.72s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-611286 --memory=3072 --mount-string /tmp/TestMountStartserial76331884/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-611286 --memory=3072 --mount-string /tmp/TestMountStartserial76331884/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.720459636s)
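The --mount-string flag above maps a host directory into the node as host-path:guest-path, and the remaining --mount-* flags set ownership (uid/gid), message size, and port for the share. The VerifyMount steps that follow simply list the guest side of the mapping, e.g.:

	out/minikube-linux-arm64 -p mount-start-1-611286 ssh -- ls /minikube-host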
--- PASS: TestMountStart/serial/StartWithMountFirst (8.72s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-611286 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.24s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-613477 --memory=3072 --mount-string /tmp/TestMountStartserial76331884/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-613477 --memory=3072 --mount-string /tmp/TestMountStartserial76331884/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.234824963s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.24s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-613477 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-611286 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-611286 --alsologtostderr -v=5: (1.706171545s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-613477 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-613477
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-613477: (1.286610714s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-613477
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-613477: (6.95987284s)
--- PASS: TestMountStart/serial/RestartStopped (7.96s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-613477 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (76.57s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728448 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1227 20:18:50.192007  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:19:14.879916  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-728448 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.007168319s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.57s)

TestMultiNode/serial/DeployApp2Nodes (4.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-728448 -- rollout status deployment/busybox: (2.628139437s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-h2pk6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-jj5p2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-h2pk6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-jj5p2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-h2pk6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-jj5p2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.50s)

TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-h2pk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-h2pk6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-jj5p2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728448 -- exec busybox-769dd8b7dd-jj5p2 -- sh -c "ping -c 1 192.168.67.1"
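The shell pipeline in the exec calls above derives the host's gateway address from DNS: in busybox, nslookup host.minikube.internal prints the resolved address on its fifth output line, awk 'NR==5' keeps that line, and cut -d' ' -f3 takes the address field, which the follow-up ping then targets (192.168.67.1 here). The same probe works from any busybox pod; <pod-name> is a placeholder:

	kubectl exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"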
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

TestMultiNode/serial/AddNode (30.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-728448 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-728448 -v=5 --alsologtostderr: (29.365077469s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.09s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-728448 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.54s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp testdata/cp-test.txt multinode-728448:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile267382921/001/cp-test_multinode-728448.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448:/home/docker/cp-test.txt multinode-728448-m02:/home/docker/cp-test_multinode-728448_multinode-728448-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m02 "sudo cat /home/docker/cp-test_multinode-728448_multinode-728448-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448:/home/docker/cp-test.txt multinode-728448-m03:/home/docker/cp-test_multinode-728448_multinode-728448-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m03 "sudo cat /home/docker/cp-test_multinode-728448_multinode-728448-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp testdata/cp-test.txt multinode-728448-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile267382921/001/cp-test_multinode-728448-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448-m02:/home/docker/cp-test.txt multinode-728448:/home/docker/cp-test_multinode-728448-m02_multinode-728448.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448 "sudo cat /home/docker/cp-test_multinode-728448-m02_multinode-728448.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448-m02:/home/docker/cp-test.txt multinode-728448-m03:/home/docker/cp-test_multinode-728448-m02_multinode-728448-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m03 "sudo cat /home/docker/cp-test_multinode-728448-m02_multinode-728448-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp testdata/cp-test.txt multinode-728448-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile267382921/001/cp-test_multinode-728448-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448-m03:/home/docker/cp-test.txt multinode-728448:/home/docker/cp-test_multinode-728448-m03_multinode-728448.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448 "sudo cat /home/docker/cp-test_multinode-728448-m03_multinode-728448.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 cp multinode-728448-m03:/home/docker/cp-test.txt multinode-728448-m02:/home/docker/cp-test_multinode-728448-m03_multinode-728448-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 ssh -n multinode-728448-m02 "sudo cat /home/docker/cp-test_multinode-728448-m03_multinode-728448-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.54s)
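
Note: the CopyFile steps above form an all-pairs matrix: testdata/cp-test.txt is copied into each node, back out to the host, and between every node pair, with each hop verified by `ssh -n <node> sudo cat`. A sketch of the node-to-node portion of that loop, assuming the same binary and profile names as the log (the real logic lives in helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test with the given arguments.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	profile := "multinode-728448"
	nodes := []string{"multinode-728448", "multinode-728448-m02", "multinode-728448-m03"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			file := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			// copy node-to-node, then verify on the destination node
			if err := run("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+file); err != nil {
				fmt.Println("copy failed:", err)
				continue
			}
			if err := run("-p", profile, "ssh", "-n", dst, "sudo cat "+file); err != nil {
				fmt.Println("verify failed:", err)
			}
		}
	}
}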

TestMultiNode/serial/StopNode (2.43s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-728448 node stop m03: (1.351076758s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-728448 status: exit status 7 (530.845693ms)
-- stdout --
	multinode-728448
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-728448-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-728448-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-728448 status --alsologtostderr: exit status 7 (544.250801ms)
-- stdout --
	multinode-728448
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-728448-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-728448-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1227 20:20:51.450943  423580 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:20:51.451320  423580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:20:51.451350  423580 out.go:374] Setting ErrFile to fd 2...
	I1227 20:20:51.451370  423580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:20:51.451721  423580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:20:51.452004  423580 out.go:368] Setting JSON to false
	I1227 20:20:51.452068  423580 mustload.go:66] Loading cluster: multinode-728448
	I1227 20:20:51.452160  423580 notify.go:221] Checking for updates...
	I1227 20:20:51.452529  423580 config.go:182] Loaded profile config "multinode-728448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:20:51.452571  423580 status.go:174] checking status of multinode-728448 ...
	I1227 20:20:51.453115  423580 cli_runner.go:164] Run: docker container inspect multinode-728448 --format={{.State.Status}}
	I1227 20:20:51.473645  423580 status.go:371] multinode-728448 host status = "Running" (err=<nil>)
	I1227 20:20:51.473672  423580 host.go:66] Checking if "multinode-728448" exists ...
	I1227 20:20:51.473984  423580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-728448
	I1227 20:20:51.500423  423580 host.go:66] Checking if "multinode-728448" exists ...
	I1227 20:20:51.500788  423580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:20:51.500860  423580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-728448
	I1227 20:20:51.519055  423580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33281 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/multinode-728448/id_rsa Username:docker}
	I1227 20:20:51.616998  423580 ssh_runner.go:195] Run: systemctl --version
	I1227 20:20:51.623900  423580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:20:51.636778  423580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:20:51.695715  423580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 20:20:51.686597069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:20:51.696258  423580 kubeconfig.go:125] found "multinode-728448" server: "https://192.168.67.2:8443"
	I1227 20:20:51.696294  423580 api_server.go:166] Checking apiserver status ...
	I1227 20:20:51.696343  423580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 20:20:51.708347  423580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1353/cgroup
	I1227 20:20:51.716608  423580 api_server.go:192] apiserver freezer: "9:freezer:/docker/2ad803e4db256998c1a0c43b2bd12df79f162751e2ffdf8fc3df18e32801a75a/kubepods/burstable/poda77a41459acefb33ea18f3306b29b229/fe85515032ad7eb4a1986551d8197008d6d2951548ca3227cfb403413cdde566"
	I1227 20:20:51.716681  423580 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2ad803e4db256998c1a0c43b2bd12df79f162751e2ffdf8fc3df18e32801a75a/kubepods/burstable/poda77a41459acefb33ea18f3306b29b229/fe85515032ad7eb4a1986551d8197008d6d2951548ca3227cfb403413cdde566/freezer.state
	I1227 20:20:51.724842  423580 api_server.go:214] freezer state: "THAWED"
	I1227 20:20:51.724888  423580 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 20:20:51.733113  423580 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 20:20:51.733167  423580 status.go:463] multinode-728448 apiserver status = Running (err=<nil>)
	I1227 20:20:51.733179  423580 status.go:176] multinode-728448 status: &{Name:multinode-728448 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:20:51.733201  423580 status.go:174] checking status of multinode-728448-m02 ...
	I1227 20:20:51.733541  423580 cli_runner.go:164] Run: docker container inspect multinode-728448-m02 --format={{.State.Status}}
	I1227 20:20:51.753605  423580 status.go:371] multinode-728448-m02 host status = "Running" (err=<nil>)
	I1227 20:20:51.753639  423580 host.go:66] Checking if "multinode-728448-m02" exists ...
	I1227 20:20:51.753940  423580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-728448-m02
	I1227 20:20:51.786616  423580 host.go:66] Checking if "multinode-728448-m02" exists ...
	I1227 20:20:51.786941  423580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 20:20:51.786988  423580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-728448-m02
	I1227 20:20:51.804604  423580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33286 SSHKeyPath:/home/jenkins/minikube-integration/22332-300670/.minikube/machines/multinode-728448-m02/id_rsa Username:docker}
	I1227 20:20:51.904428  423580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 20:20:51.917300  423580 status.go:176] multinode-728448-m02 status: &{Name:multinode-728448-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:20:51.917336  423580 status.go:174] checking status of multinode-728448-m03 ...
	I1227 20:20:51.917664  423580 cli_runner.go:164] Run: docker container inspect multinode-728448-m03 --format={{.State.Status}}
	I1227 20:20:51.934336  423580 status.go:371] multinode-728448-m03 host status = "Stopped" (err=<nil>)
	I1227 20:20:51.934361  423580 status.go:384] host is not running, skipping remaining checks
	I1227 20:20:51.934368  423580 status.go:176] multinode-728448-m03 status: &{Name:multinode-728448-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
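
Note: both non-zero exits above are expected: as this log shows, `minikube status` returns exit code 7 when at least one host is stopped, so the test asserts on the code instead of failing on it. A sketch of reading that exit code from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-728448", "status")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 7 here means a stopped host, which is what StopNode expects.
		fmt.Println("status exit code:", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}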

TestMultiNode/serial/StartAfterStop (7.81s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-728448 node start m03 -v=5 --alsologtostderr: (7.038742098s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.81s)

TestMultiNode/serial/RestartKeepsNodes (74.47s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-728448
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-728448
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-728448: (25.294620497s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728448 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-728448 --wait=true -v=5 --alsologtostderr: (49.04174927s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-728448
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.47s)
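
Note: the invariant here is simply that the `node list` output captured before the stop equals the output captured after `start --wait=true`. A sketch of that comparison with hypothetical captures:

package main

import (
	"fmt"
	"slices"
)

func main() {
	// Node lists as they might be captured before and after the restart.
	before := []string{"multinode-728448", "multinode-728448-m02", "multinode-728448-m03"}
	after := []string{"multinode-728448", "multinode-728448-m02", "multinode-728448-m03"}
	fmt.Println(slices.Equal(before, after)) // true => nodes were kept
}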

TestMultiNode/serial/DeleteNode (5.62s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-728448 node delete m03: (4.930639788s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)
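
Note: the go-template in the final step prints each node's Ready condition. The same template can be evaluated locally with text/template over map data (keys mirror the Kubernetes API; the values below are invented):

package main

import (
	"os"
	"text/template"
)

func main() {
	// The template string used by the test, verbatim.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	// Prints " True" once per remaining node.
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}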

TestMultiNode/serial/StopMultiNode (24.2s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-728448 stop: (23.999341052s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-728448 status: exit status 7 (96.478877ms)
-- stdout --
	multinode-728448
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-728448-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-728448 status --alsologtostderr: exit status 7 (100.693153ms)
-- stdout --
	multinode-728448
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-728448-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1227 20:22:43.977106  432351 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:22:43.977293  432351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:22:43.977327  432351 out.go:374] Setting ErrFile to fd 2...
	I1227 20:22:43.977353  432351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:22:43.977630  432351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:22:43.977857  432351 out.go:368] Setting JSON to false
	I1227 20:22:43.977922  432351 mustload.go:66] Loading cluster: multinode-728448
	I1227 20:22:43.978009  432351 notify.go:221] Checking for updates...
	I1227 20:22:43.978407  432351 config.go:182] Loaded profile config "multinode-728448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:22:43.978443  432351 status.go:174] checking status of multinode-728448 ...
	I1227 20:22:43.979033  432351 cli_runner.go:164] Run: docker container inspect multinode-728448 --format={{.State.Status}}
	I1227 20:22:44.000033  432351 status.go:371] multinode-728448 host status = "Stopped" (err=<nil>)
	I1227 20:22:44.000061  432351 status.go:384] host is not running, skipping remaining checks
	I1227 20:22:44.000069  432351 status.go:176] multinode-728448 status: &{Name:multinode-728448 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 20:22:44.000106  432351 status.go:174] checking status of multinode-728448-m02 ...
	I1227 20:22:44.000481  432351 cli_runner.go:164] Run: docker container inspect multinode-728448-m02 --format={{.State.Status}}
	I1227 20:22:44.031052  432351 status.go:371] multinode-728448-m02 host status = "Stopped" (err=<nil>)
	I1227 20:22:44.031080  432351 status.go:384] host is not running, skipping remaining checks
	I1227 20:22:44.031088  432351 status.go:176] multinode-728448-m02 status: &{Name:multinode-728448-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.20s)

TestMultiNode/serial/RestartMultiNode (49.39s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728448 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1227 20:22:51.828698  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-728448 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.707639515s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728448 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.39s)

TestMultiNode/serial/ValidateNameConflict (29.73s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-728448
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728448-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-728448-m02 --driver=docker  --container-runtime=containerd: exit status 14 (90.091154ms)
-- stdout --
	* [multinode-728448-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	! Profile name 'multinode-728448-m02' is duplicated with machine name 'multinode-728448-m02' in profile 'multinode-728448'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728448-m03 --driver=docker  --container-runtime=containerd
E1227 20:23:50.190671  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-728448-m03 --driver=docker  --container-runtime=containerd: (27.199573392s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-728448
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-728448: exit status 80 (335.666553ms)
-- stdout --
	* Adding node m03 to cluster multinode-728448 as [worker]
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-728448-m03 already exists in multinode-728448-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-728448-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-728448-m03: (2.052072062s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.73s)
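
Note: the two expected failures above cover both halves of the naming rule: a new profile may not reuse a machine name from an existing multi-node profile (exit 14), and `node add` refuses a generated node name that collides with an existing profile (exit 80). A simplified stand-in for the first check, not minikube's actual validator:

package main

import "fmt"

// conflicts reports whether a proposed profile name duplicates a
// machine name inside any existing profile.
func conflicts(newProfile string, existing map[string][]string) bool {
	for profile, machines := range existing {
		for _, m := range machines {
			if newProfile == m {
				fmt.Printf("profile %q duplicates machine %q in profile %q\n", newProfile, m, profile)
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string][]string{
		"multinode-728448": {"multinode-728448", "multinode-728448-m02"},
	}
	fmt.Println(conflicts("multinode-728448-m02", existing)) // true => exit status 14
	fmt.Println(conflicts("multinode-728448-m03", existing)) // false => start proceeds
}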

TestScheduledStopUnix (103.9s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-023292 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-023292 --memory=3072 --driver=docker  --container-runtime=containerd: (27.441644185s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-023292 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1227 20:24:34.919547  441825 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:24:34.919736  441825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:24:34.919771  441825 out.go:374] Setting ErrFile to fd 2...
	I1227 20:24:34.919794  441825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:24:34.920188  441825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:24:34.920543  441825 out.go:368] Setting JSON to false
	I1227 20:24:34.920707  441825 mustload.go:66] Loading cluster: scheduled-stop-023292
	I1227 20:24:34.921512  441825 config.go:182] Loaded profile config "scheduled-stop-023292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:24:34.921639  441825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/scheduled-stop-023292/config.json ...
	I1227 20:24:34.921869  441825 mustload.go:66] Loading cluster: scheduled-stop-023292
	I1227 20:24:34.922032  441825 config.go:182] Loaded profile config "scheduled-stop-023292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-023292 -n scheduled-stop-023292
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-023292 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1227 20:24:35.356884  441910 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:24:35.357026  441910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:24:35.357039  441910 out.go:374] Setting ErrFile to fd 2...
	I1227 20:24:35.357066  441910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:24:35.357526  441910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:24:35.357895  441910 out.go:368] Setting JSON to false
	I1227 20:24:35.358109  441910 daemonize_unix.go:73] killing process 441843 as it is an old scheduled stop
	I1227 20:24:35.358185  441910 mustload.go:66] Loading cluster: scheduled-stop-023292
	I1227 20:24:35.363696  441910 config.go:182] Loaded profile config "scheduled-stop-023292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:24:35.363799  441910 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/scheduled-stop-023292/config.json ...
	I1227 20:24:35.364036  441910 mustload.go:66] Loading cluster: scheduled-stop-023292
	I1227 20:24:35.364171  441910 config.go:182] Loaded profile config "scheduled-stop-023292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 20:24:35.368647  302541 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/scheduled-stop-023292/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-023292 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-023292 -n scheduled-stop-023292
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-023292
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-023292 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1227 20:25:01.348775  442585 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:25:01.348988  442585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:25:01.349161  442585 out.go:374] Setting ErrFile to fd 2...
	I1227 20:25:01.349203  442585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:25:01.349515  442585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:25:01.349893  442585 out.go:368] Setting JSON to false
	I1227 20:25:01.350039  442585 mustload.go:66] Loading cluster: scheduled-stop-023292
	I1227 20:25:01.350734  442585 config.go:182] Loaded profile config "scheduled-stop-023292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:25:01.350895  442585 profile.go:143] Saving config to /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/scheduled-stop-023292/config.json ...
	I1227 20:25:01.351392  442585 mustload.go:66] Loading cluster: scheduled-stop-023292
	I1227 20:25:01.351587  442585 config.go:182] Loaded profile config "scheduled-stop-023292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
** /stderr **
E1227 20:25:13.240387  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-023292
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-023292: exit status 7 (69.301124ms)
-- stdout --
	scheduled-stop-023292
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-023292 -n scheduled-stop-023292
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-023292 -n scheduled-stop-023292: exit status 7 (69.147874ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-023292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-023292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-023292: (4.822203491s)
--- PASS: TestScheduledStopUnix (103.90s)
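
Note: the scheduled-stop mechanics visible above are a detached child process plus a pid file under the profile directory; a new --schedule kills the previous pid ("killing process ... as it is an old scheduled stop"), and a missing pid file just means nothing is pending, hence the retry line. A sketch of the cancel path under those assumptions (the pid-file location is illustrative):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// cancelScheduled kills the process recorded in the profile's pid file.
func cancelScheduled(pidFile string) error {
	b, err := os.ReadFile(pidFile)
	if err != nil {
		return err // "no such file or directory" => nothing scheduled
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err != nil {
		return err
	}
	// "os: process already finished" at this point is also acceptable.
	return syscall.Kill(pid, syscall.SIGKILL)
}

func main() {
	fmt.Println(cancelScheduled("/tmp/scheduled-stop-example/pid"))
}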

TestInsufficientStorage (12.57s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-204788 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-204788 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.028819301s)
-- stdout --
	{"specversion":"1.0","id":"d85d638d-fa03-425f-b5b8-e5f38256f286","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-204788] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8de0c7e-0e61-48db-9f43-17bf708a586f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22332"}}
	{"specversion":"1.0","id":"fd3c01f5-c64d-4bb5-96e0-6fca9f2580f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b839c5d-0330-4420-a8e6-6f77123c39f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig"}}
	{"specversion":"1.0","id":"e0772b10-3e71-48dd-86e3-908dd5e95ff5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube"}}
	{"specversion":"1.0","id":"563c6b36-bb27-4bd7-b913-d961f734086b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"39bae011-ca1c-4268-b67b-86f9328d204a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cdeaa1be-a1b9-467c-9c38-9a408c33176d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8e1ff562-940b-401b-b7bb-dc8e8544413e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c18f7eb3-463e-4124-9f6e-17490bb837ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a84798a-dc4b-4735-ad58-4d4636ab50ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dd0e9832-44fc-43cb-a56e-90f67ea9b381","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-204788\" primary control-plane node in \"insufficient-storage-204788\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"10109688-450e-492c-bbc3-768dac9647e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f04da38-e32d-4855-aa06-a173aafa716b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"11f39923-7111-4552-af66-be71f1e09a39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-204788 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-204788 --output=json --layout=cluster: exit status 7 (289.682553ms)
-- stdout --
	{"Name":"insufficient-storage-204788","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-204788","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1227 20:26:01.625703  444424 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-204788" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-204788 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-204788 --output=json --layout=cluster: exit status 7 (300.969471ms)
-- stdout --
	{"Name":"insufficient-storage-204788","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-204788","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1227 20:26:01.925162  444489 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-204788" does not appear in /home/jenkins/minikube-integration/22332-300670/kubeconfig
	E1227 20:26:01.935170  444489 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/insufficient-storage-204788/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-204788" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-204788
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-204788: (1.946726191s)
--- PASS: TestInsufficientStorage (12.57s)
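
Note: with --output=json, start emits one CloudEvents-style JSON object per line, and the test watches for the io.k8s.sigs.minikube.error event carrying RSRC_DOCKER_STORAGE and exit code 26. A minimal Go reader for that stream, using a trimmed copy of the error event above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event models just the fields this sketch needs from each JSON line.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}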

TestRunningBinaryUpgrade (319.34s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1127096449 start -p running-upgrade-108405 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1227 20:32:51.829070  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1127096449 start -p running-upgrade-108405 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (36.597749374s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-108405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-108405 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.929577574s)
helpers_test.go:176: Cleaning up "running-upgrade-108405" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-108405
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-108405: (1.959725736s)
--- PASS: TestRunningBinaryUpgrade (319.34s)

TestKubernetesUpgrade (330.85s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-893436 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-893436 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.893299862s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-893436 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-893436 --alsologtostderr: (1.353325426s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-893436 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-893436 status --format={{.Host}}: exit status 7 (74.824138ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-893436 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1227 20:27:51.829410  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-893436 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.074766661s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-893436 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-893436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-893436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (143.791794ms)
-- stdout --
	* [kubernetes-upgrade-893436] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-893436
	    minikube start -p kubernetes-upgrade-893436 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8934362 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-893436 --kubernetes-version=v1.35.0
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-893436 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-893436 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (12.710533581s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-893436" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-893436
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-893436: (2.472688046s)
--- PASS: TestKubernetesUpgrade (330.85s)
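
Note: the exit-106 refusal above is a version-ordering guard: the requested v1.28.0 predates the cluster's v1.35.0, so start aborts with K8S_DOWNGRADE_UNSUPPORTED instead of downgrading. A toy version comparison in the same spirit (minimal parser for illustration only, not minikube's check):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// older reports whether version a predates b; both look like "v1.35.0".
func older(a, b string) bool {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < len(pa) && i < len(pb); i++ {
		na, _ := strconv.Atoi(pa[i])
		nb, _ := strconv.Atoi(pb[i])
		if na != nb {
			return na < nb
		}
	}
	return len(pa) < len(pb)
}

func main() {
	current, requested := "v1.35.0", "v1.28.0"
	if older(requested, current) {
		// Mirrors the K8S_DOWNGRADE_UNSUPPORTED exit (status 106) above.
		fmt.Printf("refusing to downgrade %s cluster to %s\n", current, requested)
	}
}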

TestMissingContainerUpgrade (129.72s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3139374454 start -p missing-upgrade-369981 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3139374454 start -p missing-upgrade-369981 --memory=3072 --driver=docker  --container-runtime=containerd: (1m3.151850492s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-369981
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-369981
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-369981 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-369981 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.474143357s)
helpers_test.go:176: Cleaning up "missing-upgrade-369981" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-369981
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-369981: (2.383385411s)
--- PASS: TestMissingContainerUpgrade (129.72s)

TestPause/serial/Start (56.41s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-799599 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-799599 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.414409915s)
--- PASS: TestPause/serial/Start (56.41s)

TestPause/serial/SecondStartNoReconfiguration (8.15s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-799599 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-799599 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.125552361s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.15s)

TestPause/serial/Pause (1.07s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-799599 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-799599 --alsologtostderr -v=5: (1.073965443s)
--- PASS: TestPause/serial/Pause (1.07s)

TestPause/serial/VerifyStatus (0.32s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-799599 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-799599 --output=json --layout=cluster: exit status 2 (319.816607ms)
-- stdout --
	{"Name":"pause-799599","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-799599","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
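
Note: --layout=cluster reports HTTP-flavored status codes; this run shows 200 OK, 405 Stopped, and 418 Paused (507 InsufficientStorage appears in TestInsufficientStorage above). A sketch of decoding that JSON, with struct fields mirroring the keys and a trimmed copy of the output as input:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type cluster struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

func main() {
	raw := `{"Name":"pause-799599","StatusCode":418,"StatusName":"Paused","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}}}`
	var c cluster
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", c.Name, c.StatusCode, c.StatusName) // pause-799599: 418 Paused
}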

TestPause/serial/Unpause (0.62s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-799599 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestPause/serial/PauseAgain (0.86s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-799599 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

                                                
                                    
TestPause/serial/DeletePaused (2.41s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-799599 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-799599 --alsologtostderr -v=5: (2.409307506s)
--- PASS: TestPause/serial/DeletePaused (2.41s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.15s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-799599
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-799599: exit status 1 (16.747328ms)

                                                
                                                
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-799599: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.15s)
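
The verification amounts to three Docker-side probes, all shown above. A hand-run equivalent (the name filter and grep are illustrative additions, not part of the test):

$ docker ps -a --filter name=pause-799599       # expect no matching container
$ docker volume inspect pause-799599 || true    # expect "no such volume" and exit status 1
$ docker network ls | grep pause-799599 || true # expect no match once the network is gone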

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.9s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.90s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (305.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.41577964 start -p stopped-upgrade-732033 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.41577964 start -p stopped-upgrade-732033 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.628997044s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.41577964 -p stopped-upgrade-732033 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.41577964 -p stopped-upgrade-732033 stop: (1.25657394s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-732033 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1227 20:28:50.189739  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-732033 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m31.677725702s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (305.56s)
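
The upgrade scenario is: provision with the previous release, stop, then start again with the binary under test and let it migrate the profile. Condensed from the commands above (the /tmp binary is the v1.35.0 release the test fetched for itself):

$ /tmp/minikube-v1.35.0.41577964 start -p stopped-upgrade-732033 --memory=3072 --vm-driver=docker --container-runtime=containerd
$ /tmp/minikube-v1.35.0.41577964 -p stopped-upgrade-732033 stop
$ out/minikube-linux-arm64 start -p stopped-upgrade-732033 --memory=3072 --driver=docker --container-runtime=containerd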

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.34s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-732033
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-732033: (2.341296478s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.34s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (65.74s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-566884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1227 20:33:50.189659  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-566884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (58.897365676s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-566884 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-566884
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-566884: (5.962163932s)
--- PASS: TestPreload/Start-NoPreload-PullImage (65.74s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (47.18s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-566884 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-566884 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (46.879958182s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-566884 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (47.18s)
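
Taken together, the two TestPreload subtests assert that an image pulled into a cluster started with --preload=false survives a stop and a --preload=true restart. The whole flow, condensed from the commands above:

$ out/minikube-linux-arm64 start -p test-preload-566884 --memory=3072 --preload=false --driver=docker --container-runtime=containerd
$ out/minikube-linux-arm64 -p test-preload-566884 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
$ out/minikube-linux-arm64 stop -p test-preload-566884
$ out/minikube-linux-arm64 start -p test-preload-566884 --preload=true --driver=docker --container-runtime=containerd
$ out/minikube-linux-arm64 -p test-preload-566884 image list   # the pulled busybox should still appear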

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130300 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-130300 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (97.924202ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-130300] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
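
Exit status 14 is the MK_USAGE (usage error) path: --no-kubernetes and --kubernetes-version are mutually exclusive. Per the hint in the stderr above, the two valid shapes of the command are:

# run with no Kubernetes components at all
$ out/minikube-linux-arm64 start -p NoKubernetes-130300 --no-kubernetes --driver=docker --container-runtime=containerd
# or clear any global version override, then pin a version explicitly
$ out/minikube-linux-arm64 config unset kubernetes-version
$ out/minikube-linux-arm64 start -p NoKubernetes-130300 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd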

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (28.29s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130300 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-130300 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.911705148s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-130300 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.79s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-130300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.503491592s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-130300 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-130300 status -o json: exit status 2 (310.286992ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-130300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-130300
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-130300: (1.977929087s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.79s)
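
Re-running start with --no-kubernetes against the existing profile stops the control plane in place, so the follow-up status call exits 2 with Host still Running and Kubelet/APIServer Stopped, exactly as the JSON above shows. A sketch, with || true to tolerate that expected exit code:

$ out/minikube-linux-arm64 start -p NoKubernetes-130300 --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd
$ out/minikube-linux-arm64 -p NoKubernetes-130300 status -o json || true   # exit 2: host up, Kubernetes down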

                                                
                                    
TestNoKubernetes/serial/Start (7.44s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-130300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.43981678s)
--- PASS: TestNoKubernetes/serial/Start (7.44s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22332-300670/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-130300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-130300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.37547ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
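
"ssh: Process exited with status 3" here is the assertion succeeding, not an error: systemctl is-active exits 0 only for an active unit (3 being the usual code for an inactive one), so a non-zero exit confirms the kubelet is down. The same probe by hand:

$ out/minikube-linux-arm64 ssh -p NoKubernetes-130300 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running, as expected"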

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-130300
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-130300: (1.290705317s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.58s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130300 --driver=docker  --container-runtime=containerd
E1227 20:38:50.189327  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-130300 --driver=docker  --container-runtime=containerd: (6.577176155s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-130300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-130300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.444092ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestNetworkPlugins/group/false (3.65s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-779255 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-779255 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (186.61041ms)

                                                
                                                
-- stdout --
	* [false-779255] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22332
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1227 20:39:01.788708  506379 out.go:360] Setting OutFile to fd 1 ...
	I1227 20:39:01.788934  506379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:39:01.788962  506379 out.go:374] Setting ErrFile to fd 2...
	I1227 20:39:01.788983  506379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 20:39:01.789893  506379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22332-300670/.minikube/bin
	I1227 20:39:01.790370  506379 out.go:368] Setting JSON to false
	I1227 20:39:01.791233  506379 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8493,"bootTime":1766859449,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1227 20:39:01.791308  506379 start.go:143] virtualization:  
	I1227 20:39:01.794664  506379 out.go:179] * [false-779255] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 20:39:01.798424  506379 out.go:179]   - MINIKUBE_LOCATION=22332
	I1227 20:39:01.798594  506379 notify.go:221] Checking for updates...
	I1227 20:39:01.804294  506379 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 20:39:01.807372  506379 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22332-300670/kubeconfig
	I1227 20:39:01.810295  506379 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22332-300670/.minikube
	I1227 20:39:01.813205  506379 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 20:39:01.816119  506379 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 20:39:01.819626  506379 config.go:182] Loaded profile config "force-systemd-env-857112": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 20:39:01.819745  506379 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 20:39:01.842192  506379 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 20:39:01.842306  506379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 20:39:01.907986  506379 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 20:39:01.897754315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 20:39:01.908104  506379 docker.go:319] overlay module found
	I1227 20:39:01.911371  506379 out.go:179] * Using the docker driver based on user configuration
	I1227 20:39:01.914510  506379 start.go:309] selected driver: docker
	I1227 20:39:01.914533  506379 start.go:928] validating driver "docker" against <nil>
	I1227 20:39:01.914548  506379 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 20:39:01.917985  506379 out.go:203] 
	W1227 20:39:01.920869  506379 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1227 20:39:01.923756  506379 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-779255 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-779255

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-779255

>>> host: /etc/nsswitch.conf:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /etc/hosts:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /etc/resolv.conf:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-779255

>>> host: crictl pods:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: crictl containers:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> k8s: describe netcat deployment:
error: context "false-779255" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-779255" does not exist

>>> k8s: netcat logs:
error: context "false-779255" does not exist

>>> k8s: describe coredns deployment:
error: context "false-779255" does not exist

>>> k8s: describe coredns pods:
error: context "false-779255" does not exist

>>> k8s: coredns logs:
error: context "false-779255" does not exist

>>> k8s: describe api server pod(s):
error: context "false-779255" does not exist

>>> k8s: api server logs:
error: context "false-779255" does not exist

>>> host: /etc/cni:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: ip a s:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: ip r s:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: iptables-save:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: iptables table nat:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> k8s: describe kube-proxy daemon set:
error: context "false-779255" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-779255" does not exist

>>> k8s: kube-proxy logs:
error: context "false-779255" does not exist

>>> host: kubelet daemon status:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: kubelet daemon config:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> k8s: kubelet logs:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-779255

>>> host: docker daemon status:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: docker daemon config:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /etc/docker/daemon.json:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: docker system info:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: cri-docker daemon status:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: cri-docker daemon config:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: cri-dockerd version:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: containerd daemon status:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: containerd daemon config:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /etc/containerd/config.toml:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: containerd config dump:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: crio daemon status:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: crio daemon config:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: /etc/crio:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

>>> host: crio config:
* Profile "false-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-779255"

----------------------- debugLogs end: false-779255 [took: 3.30890324s] --------------------------------
helpers_test.go:176: Cleaning up "false-779255" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-779255
--- PASS: TestNetworkPlugins/group/false (3.65s)
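
This pass is validation-only: with the containerd runtime minikube rejects --cni=false up front (exit 14, MK_USAGE) because containerd relies on a CNI plugin for pod networking. Any concrete CNI choice would get past the check; bridge is shown here purely as an illustration:

$ out/minikube-linux-arm64 start -p false-779255 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd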

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (60.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-551586 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-551586 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m0.492267275s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-551586 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [70a74397-6168-4c01-a6b6-a664924b8467] Pending
helpers_test.go:353: "busybox" [70a74397-6168-4c01-a6b6-a664924b8467] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [70a74397-6168-4c01-a6b6-a664924b8467] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004463251s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-551586 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)
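
The deploy check is ordinary kubectl against the profile's context: create the busybox pod from testdata, wait for it to go Ready, then exec into it. Roughly as below; the kubectl wait line is a stand-in for the test's own polling loop:

$ kubectl --context old-k8s-version-551586 create -f testdata/busybox.yaml
$ kubectl --context old-k8s-version-551586 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
$ kubectl --context old-k8s-version-551586 exec busybox -- /bin/sh -c "ulimit -n"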

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-551586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-551586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.062732449s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-551586 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-551586 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-551586 --alsologtostderr -v=3: (12.095584925s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-551586 -n old-k8s-version-551586
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-551586 -n old-k8s-version-551586: exit status 7 (71.556393ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-551586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
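
The "exit status 7 (may be ok)" note reflects how minikube status encodes its result: each stopped layer sets a bit in the exit code, and 7 appears to be all bits set, i.e. the profile is fully stopped, which is exactly the expected state right after a stop. Addons can still be toggled on a stopped profile:

$ out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-551586 || true   # prints Stopped, exits 7
$ out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-551586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4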

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-551586 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-551586 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (46.997338296s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-551586 -n old-k8s-version-551586
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5czm8" [2f957430-da57-4f7c-bc3b-ee54a0f41565] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003382028s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5czm8" [2f957430-da57-4f7c-bc3b-ee54a0f41565] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003117597s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-551586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-551586 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
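
The "Found non-minikube image" lines are informational: the check lists every image on the node and flags the ones outside the stock Kubernetes set, here the kindnet CNI images and the busybox test image. The same listing can be pulled directly:

$ out/minikube-linux-arm64 -p old-k8s-version-551586 image list --format=json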

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-551586 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-551586 -n old-k8s-version-551586
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-551586 -n old-k8s-version-551586: exit status 2 (400.18838ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-551586 -n old-k8s-version-551586
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-551586 -n old-k8s-version-551586: exit status 2 (331.311415ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-551586 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-551586 -n old-k8s-version-551586
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-551586 -n old-k8s-version-551586
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (52.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-259913 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-259913 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (52.862531696s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-259913 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [21c4e872-9aff-496b-90ed-ac934821bba2] Pending
helpers_test.go:353: "busybox" [21c4e872-9aff-496b-90ed-ac934821bba2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [21c4e872-9aff-496b-90ed-ac934821bba2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003759946s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-259913 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-259913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-259913 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-259913 --alsologtostderr -v=3
E1227 20:47:51.828708  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-259913 --alsologtostderr -v=3: (12.109863472s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-259913 -n no-preload-259913
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-259913 -n no-preload-259913: exit status 7 (67.68136ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-259913 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
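
Side note on the exit code above: minikube status encodes component state in its exit status, and 7 here simply reflects a fully stopped profile, which the test tolerates ("may be ok") because addons can be enabled while the cluster is down. A manual equivalent of the two steps:

    # Status exits 7 because the host is stopped; that is the expected state here.
    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-259913
    # Enabling an addon does not require a running cluster.
    out/minikube-linux-arm64 addons enable dashboard -p no-preload-259913 --images=MetricsScraper=registry.k8s.io/echoserver:1.4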

TestStartStop/group/no-preload/serial/SecondStart (49.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-259913 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1227 20:48:50.189928  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-259913 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (49.221136894s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-259913 -n no-preload-259913
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.58s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-m7jtj" [0a095861-522d-41ef-b5f2-47c9d7890817] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00287758s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-m7jtj" [0a095861-522d-41ef-b5f2-47c9d7890817] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002838293s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-259913 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-259913 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (3.02s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-259913 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-259913 -n no-preload-259913
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-259913 -n no-preload-259913: exit status 2 (319.157549ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-259913 -n no-preload-259913
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-259913 -n no-preload-259913: exit status 2 (335.356438ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-259913 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-259913 -n no-preload-259913
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-259913 -n no-preload-259913
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.02s)
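
The Pause subtests in every group follow the same symmetric sequence; a compact manual version, with the profile name taken from this run, would be:

    # After pause, status exits 2: the apiserver reports Paused and the kubelet Stopped.
    out/minikube-linux-arm64 pause -p no-preload-259913
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-259913   # "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-259913     # "Stopped", exit 2
    # Unpause brings both components back, after which the same status calls exit 0.
    out/minikube-linux-arm64 unpause -p no-preload-259913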

TestStartStop/group/embed-certs/serial/FirstStart (44.39s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-920276 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-920276 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (44.389670538s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.39s)

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-920276 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [95fe21fc-bb1b-46e8-9941-e314e36b639f] Pending
helpers_test.go:353: "busybox" [95fe21fc-bb1b-46e8-9941-e314e36b639f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [95fe21fc-bb1b-46e8-9941-e314e36b639f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003126095s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-920276 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-920276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-920276 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (12.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-920276 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-920276 --alsologtostderr -v=3: (12.147433055s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-920276 -n embed-certs-920276
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-920276 -n embed-certs-920276: exit status 7 (74.041257ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-920276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (52.73s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-920276 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1227 20:50:16.629181  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:16.634548  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:16.644815  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:16.665104  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:16.705375  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:16.785569  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:16.945925  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:17.266564  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:17.907357  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:19.187643  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:21.748764  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:26.869834  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:37.110084  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:50:57.590478  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-920276 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (52.380490812s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-920276 -n embed-certs-920276
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.73s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-msg56" [f124a571-6d1b-4b6d-8c0a-f36511c8daf4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003707495s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-052065 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-052065 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.019242989s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-msg56" [f124a571-6d1b-4b6d-8c0a-f36511c8daf4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004292751s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-920276 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-920276 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (4.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-920276 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-920276 -n embed-certs-920276
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-920276 -n embed-certs-920276: exit status 2 (423.559112ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-920276 -n embed-certs-920276
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-920276 -n embed-certs-920276: exit status 2 (455.101142ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-920276 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-920276 -n embed-certs-920276
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-920276 -n embed-certs-920276
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.25s)

TestStartStop/group/newest-cni/serial/FirstStart (34.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-298041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1227 20:51:38.551294  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-298041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (34.705021027s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.71s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-052065 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5276a3eb-8472-42ef-9f7f-e8ca5f42bdbb] Pending
helpers_test.go:353: "busybox" [5276a3eb-8472-42ef-9f7f-e8ca5f42bdbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5276a3eb-8472-42ef-9f7f-e8ca5f42bdbb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004756786s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-052065 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.57s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-298041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-298041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.128681849s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (1.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-298041 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-298041 --alsologtostderr -v=3: (1.45405025s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.45s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-298041 -n newest-cni-298041
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-298041 -n newest-cni-298041: exit status 7 (78.940958ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-298041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (14.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-298041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-298041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (13.951401592s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-298041 -n newest-cni-298041
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-052065 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-052065 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.323533092s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-052065 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.46s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-052065 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-052065 --alsologtostderr -v=3: (12.556064864s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.56s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-298041 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.97s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-298041 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-298041 -n newest-cni-298041
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-298041 -n newest-cni-298041: exit status 2 (327.432585ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-298041 -n newest-cni-298041
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-298041 -n newest-cni-298041: exit status 2 (315.605035ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-298041 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-298041 -n newest-cni-298041
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-298041 -n newest-cni-298041
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065: exit status 7 (108.761778ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-052065 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-052065 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-052065 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (52.495591661s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.87s)

TestPreload/PreloadSrc/gcs (5.98s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-145837 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-145837 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (5.725091289s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-145837" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-145837
--- PASS: TestPreload/PreloadSrc/gcs (5.98s)

TestPreload/PreloadSrc/github (8.07s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-969616 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
E1227 20:52:34.880390  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:38.441463  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:38.446756  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:38.457034  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:38.477321  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:38.518084  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:38.598503  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:38.758756  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-969616 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (7.826703915s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-969616" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-969616
E1227 20:52:39.078904  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPreload/PreloadSrc/github (8.07s)

TestPreload/PreloadSrc/gcs-cached (0.76s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-749244 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-749244" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-749244
E1227 20:52:39.720344  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.76s)
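
The sub-second gcs-cached result appears to be the point of that subtest: the v1.34.0-rc.2 preload was already downloaded by the github run just above, so this start is served from the local cache with no network fetch. To inspect the cache by hand (the preloaded-tarball subdirectory is the standard minikube cache layout, an assumption rather than something shown in this log):

    # Preload tarballs are kept under the minikube cache directory.
    ls "${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/"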

TestNetworkPlugins/group/auto/Start (47.62s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1227 20:52:41.001327  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:43.561571  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:48.682264  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:51.829062  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/addons-829359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:52:58.922947  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:53:00.471898  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (47.617201009s)
--- PASS: TestNetworkPlugins/group/auto/Start (47.62s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-c475d" [e0f7b202-7d90-4aa6-8018-8a1e1ae86441] Running
E1227 20:53:19.403867  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003524851s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-c475d" [e0f7b202-7d90-4aa6-8018-8a1e1ae86441] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012538906s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-052065 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-052065 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-779255 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-052065 --alsologtostderr -v=1
I1227 20:53:27.941282  302541 config.go:182] Loaded profile config "auto-779255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065: exit status 2 (337.528224ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065: exit status 2 (446.909655ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-052065 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-052065 -n default-k8s-diff-port-052065
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)
E1227 20:58:22.807440  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.292938  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.298176  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.308455  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.328860  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.369155  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.449599  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.610052  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:28.930668  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:29.571689  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:30.852215  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:33.241758  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:33.413202  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:38.533978  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/auto/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-779255 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-l4s8j" [c2585ba3-b119-4d7b-ada4-4e607db455d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-l4s8j" [c2585ba3-b119-4d7b-ada4-4e607db455d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003571456s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.39s)

TestNetworkPlugins/group/kindnet/Start (52.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (52.149524518s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.15s)
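
Each Start subtest boots a dedicated profile with the CNI under test and waits for the cluster to settle. Once it passes, the plugin's own pods can be inspected by hand (a sketch; the app=kindnet label is the one the ControllerPod check below selects on):

    kubectl --context kindnet-779255 get pods -n kube-system -l app=kindnet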

TestNetworkPlugins/group/auto/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-779255 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.35s)
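
The DNS subtest execs nslookup inside the netcat pod and passes as long as kubernetes.default resolves through the cluster DNS service. The same probe by hand:

    kubectl --context auto-779255 exec deployment/netcat -- nslookup kubernetes.default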

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)
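
Localhost uses netcat in zero-I/O scan mode: -z only tests that the port accepts a connection, -w 5 caps the wait in seconds, and -i 5 spaces out probes. The exact command the test runs in the pod:

    kubectl --context auto-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"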

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
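
HairPin checks that the pod can reach itself through its own service name (netcat on 8080), i.e. that hairpin traffic is NATed back to the originating pod; CNI setups without hairpin mode fail exactly this probe. It is the same nc scan aimed at the service instead of localhost:

    kubectl --context auto-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"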

TestNetworkPlugins/group/calico/Start (59.45s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (59.452283353s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.45s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-pxvh8" [dba14351-aae6-4732-9f71-0d14810ad3d9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003344153s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-779255 "pgrep -a kubelet"
I1227 20:54:33.411704  302541 config.go:182] Loaded profile config "kindnet-779255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
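
KubeletFlags simply SSHes into the node and dumps the kubelet command line (pgrep -a prints each matching PID with its full argv), which lets the assertion inspect the flags kubelet was actually started with:

    out/minikube-linux-arm64 ssh -p kindnet-779255 "pgrep -a kubelet"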

TestNetworkPlugins/group/kindnet/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-779255 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-6dkfr" [124dea5c-f518-4449-a910-e173c68fa29e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-6dkfr" [124dea5c-f518-4449-a910-e173c68fa29e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.002945016s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.39s)

TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-779255 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-lhkg5" [39a6c537-d726-400b-bab9-f690d8527c26] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003604886s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
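
ControllerPod waits up to 10m for the plugin's controller pod, selected by label, to become healthy. A roughly equivalent one-liner (a sketch using kubectl wait in place of the test's internal poller):

    kubectl --context calico-779255 -n kube-system wait pod -l k8s-app=calico-node --for=condition=Ready --timeout=10m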

TestNetworkPlugins/group/custom-flannel/Start (55.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.599176643s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.60s)
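
Note that --cni accepts a path to a CNI manifest as well as a built-in plugin name; this subtest feeds it testdata/kube-flannel.yaml. Trimmed to the relevant flags, the invocation is:

    out/minikube-linux-arm64 start -p custom-flannel-779255 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd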

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-779255 "pgrep -a kubelet"
I1227 20:55:11.195443  302541 config.go:182] Loaded profile config "calico-779255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-779255 replace --force -f testdata/netcat-deployment.yaml
I1227 20:55:11.544072  302541 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-ddd5w" [3cb43db8-2b2b-450f-87fe-3566485018e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-ddd5w" [3cb43db8-2b2b-450f-87fe-3566485018e0] Running
E1227 20:55:16.629690  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/old-k8s-version-551586/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:55:22.293891  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003964763s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-779255 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (66.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m6.691448071s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.69s)
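
This variant exercises the older --enable-default-cni=true flag (minikube's built-in bridge CNI configuration; newer invocations would typically spell it --cni=bridge) instead of naming a plugin. Trimmed:

    out/minikube-linux-arm64 start -p enable-default-cni-779255 --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=containerd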

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-779255 "pgrep -a kubelet"
I1227 20:56:04.283271  302541 config.go:182] Loaded profile config "custom-flannel-779255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-779255 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-f79nz" [aa98593c-dbcd-457e-af01-540e7ecd1817] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-f79nz" [aa98593c-dbcd-457e-af01-540e7ecd1817] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003839548s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-779255 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (54.62s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.616247057s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.62s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-779255 "pgrep -a kubelet"
I1227 20:56:57.013044  302541 config.go:182] Loaded profile config "enable-default-cni-779255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-779255 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-vwqlj" [36776a2c-40f4-492e-a73f-9e8c68b1ae6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 20:57:00.886148  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:00.891426  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:00.901698  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:00.922102  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:00.962366  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:01.042560  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:01.202886  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-vwqlj" [36776a2c-40f4-492e-a73f-9e8c68b1ae6e] Running
E1227 20:57:01.523070  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:02.163944  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:03.444587  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:57:06.004778  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004439115s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-779255 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (72.63s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-779255 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m12.634382559s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.63s)
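
bridge is the last plugin in this matrix; as with the others, the Start is followed below by the KubeletFlags/NetCatPod/DNS/Localhost/HairPin battery against the same profile. Trimmed invocation:

    out/minikube-linux-arm64 start -p bridge-779255 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd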

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-rlj97" [189f8db8-671a-495c-a334-2710aa8c08f0] Running
E1227 20:57:38.440841  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/no-preload-259913/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004779283s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-779255 "pgrep -a kubelet"
I1227 20:57:39.364456  302541 config.go:182] Loaded profile config "flannel-779255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-779255 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-nnc7l" [716d9ecc-58f5-4c2c-b6d5-8d37bf69151b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 20:57:41.846312  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/default-k8s-diff-port-052065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-nnc7l" [716d9ecc-58f5-4c2c-b6d5-8d37bf69151b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004229744s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

TestNetworkPlugins/group/flannel/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-779255 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.32s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-779255 "pgrep -a kubelet"
I1227 20:58:40.919697  302541 config.go:182] Loaded profile config "bridge-779255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-779255 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rcw8n" [040e4c10-4f20-45bd-b6ee-34a8989a79f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rcw8n" [040e4c10-4f20-45bd-b6ee-34a8989a79f2] Running
E1227 20:58:48.774530  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/auto-779255/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 20:58:50.190017  302541 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22332-300670/.minikube/profiles/functional-698656/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003710463s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-779255 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-779255 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (30/337)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-902432 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-902432" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-902432
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-283752" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-283752
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.76s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-779255 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-779255

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-779255" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> k8s: kubelet logs:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

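The empty kubeconfig above (clusters: null, contexts: null, empty current-context) is the root of every kubectl failure in this dump: the kubenet-779255 profile was never started, so no context for it was ever written. The same error class can be reproduced by hand; this is a sketch under the assumption that kubectl is installed and no such context exists in the local kubeconfig (the profile name is taken from the log):

	kubectl --context kubenet-779255 get pods
	# error: context "kubenet-779255" does not exist
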
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-779255

>>> host: docker daemon status:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: docker daemon config:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: docker system info:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: cri-docker daemon status:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: cri-docker daemon config:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: cri-dockerd version:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: containerd daemon status:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: containerd daemon config:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: containerd config dump:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: crio daemon status:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: crio daemon config:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: /etc/crio:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

>>> host: crio config:
* Profile "kubenet-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-779255"

----------------------- debugLogs end: kubenet-779255 [took: 3.614791036s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-779255" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-779255
--- SKIP: TestNetworkPlugins/group/kubenet (3.76s)

TestNetworkPlugins/group/cilium (3.82s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-779255 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-779255

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-779255

>>> host: /etc/nsswitch.conf:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /etc/hosts:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /etc/resolv.conf:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-779255

>>> host: crictl pods:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: crictl containers:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> k8s: describe netcat deployment:
error: context "cilium-779255" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-779255" does not exist

>>> k8s: netcat logs:
error: context "cilium-779255" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-779255" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-779255" does not exist

>>> k8s: coredns logs:
error: context "cilium-779255" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-779255" does not exist

>>> k8s: api server logs:
error: context "cilium-779255" does not exist

>>> host: /etc/cni:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: ip a s:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: ip r s:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: iptables-save:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: iptables table nat:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-779255

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-779255

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-779255" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-779255" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-779255

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-779255

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-779255" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-779255" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-779255" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-779255" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-779255" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: kubelet daemon config:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> k8s: kubelet logs:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-779255

>>> host: docker daemon status:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: docker daemon config:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: docker system info:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: cri-docker daemon status:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: cri-docker daemon config:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: cri-dockerd version:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: containerd daemon status:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: containerd daemon config:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: containerd config dump:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: crio daemon status:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: crio daemon config:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: /etc/crio:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

>>> host: crio config:
* Profile "cilium-779255" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-779255"

----------------------- debugLogs end: cilium-779255 [took: 3.668735858s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-779255" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-779255
--- SKIP: TestNetworkPlugins/group/cilium (3.82s)
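
Every probe in the dump above failed identically because the cilium-779255 profile never existed: the test was skipped at net_test.go:102 before minikube start ever ran, yet the debug-log collector still executed its full battery of host and kubectl commands. A sketch for verifying that state by hand, using only commands that already appear in this report:

	out/minikube-linux-arm64 profile list              # neither kubenet-779255 nor cilium-779255 should be listed
	out/minikube-linux-arm64 delete -p cilium-779255   # the cleanup helpers_test.go:179 runs above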
