Test Report: Docker_Linux_containerd_arm64 22343

72a35eba785b899784aeadb9114946ce54d68eef:2025-12-27:43008

Failed tests (2/337)

Order  Failed test            Duration (s)
52     TestForceSystemdFlag   503.95
53     TestForceSystemdEnv    507.95
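To triage locally, the failing invocation can be replayed exactly as recorded in the log below. This is just the command line lifted from the test output; it assumes a built out/minikube-linux-arm64 binary in a minikube checkout, and the profile name is arbitrary:

    out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 \
      --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd

TestForceSystemdFlag exited with status 109 after roughly 8m20s, so allow a local run at least that long before comparing logs.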
TestForceSystemdFlag (503.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1227 10:11:55.168043 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m20.207805904s)

-- stdout --
	* [force-systemd-flag-027208] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-027208" primary control-plane node in "force-systemd-flag-027208" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I1227 10:10:14.060682 3738115 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:10:14.060840 3738115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:10:14.060853 3738115 out.go:374] Setting ErrFile to fd 2...
	I1227 10:10:14.060859 3738115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:10:14.061129 3738115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 10:10:14.061557 3738115 out.go:368] Setting JSON to false
	I1227 10:10:14.062452 3738115 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":57166,"bootTime":1766773048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 10:10:14.062522 3738115 start.go:143] virtualization:  
	I1227 10:10:14.066189 3738115 out.go:179] * [force-systemd-flag-027208] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:10:14.070968 3738115 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:10:14.071126 3738115 notify.go:221] Checking for updates...
	I1227 10:10:14.077634 3738115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:10:14.080928 3738115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 10:10:14.084146 3738115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 10:10:14.087414 3738115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:10:14.090571 3738115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:10:14.094274 3738115 config.go:182] Loaded profile config "force-systemd-env-194624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:10:14.094431 3738115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:10:14.131713 3738115 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:10:14.131835 3738115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:10:14.222716 3738115 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:10:14.212351353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:10:14.222833 3738115 docker.go:319] overlay module found
	I1227 10:10:14.226201 3738115 out.go:179] * Using the docker driver based on user configuration
	I1227 10:10:14.229183 3738115 start.go:309] selected driver: docker
	I1227 10:10:14.229209 3738115 start.go:928] validating driver "docker" against <nil>
	I1227 10:10:14.229223 3738115 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:10:14.229983 3738115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:10:14.283479 3738115 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:10:14.273728372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:10:14.283631 3738115 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:10:14.283847 3738115 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 10:10:14.286995 3738115 out.go:179] * Using Docker driver with root privileges
	I1227 10:10:14.290011 3738115 cni.go:84] Creating CNI manager for ""
	I1227 10:10:14.290080 3738115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:10:14.290097 3738115 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:10:14.290178 3738115 start.go:353] cluster config:
	{Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I1227 10:10:14.293396 3738115 out.go:179] * Starting "force-systemd-flag-027208" primary control-plane node in "force-systemd-flag-027208" cluster
	I1227 10:10:14.296262 3738115 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 10:10:14.299201 3738115 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:10:14.302027 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:10:14.302080 3738115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 10:10:14.302089 3738115 cache.go:65] Caching tarball of preloaded images
	I1227 10:10:14.302190 3738115 preload.go:251] Found /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 10:10:14.302205 3738115 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 10:10:14.302312 3738115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json ...
	I1227 10:10:14.302339 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json: {Name:mk8e499633705fb35f3a63ac14b480b9b5477cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:14.302514 3738115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:10:14.324411 3738115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:10:14.324434 3738115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:10:14.324451 3738115 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:10:14.324490 3738115 start.go:360] acquireMachinesLock for force-systemd-flag-027208: {Name:mk408a0d777415c6b3bf75190db8aa17e71bedcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:10:14.324601 3738115 start.go:364] duration metric: took 89.656µs to acquireMachinesLock for "force-systemd-flag-027208"
	I1227 10:10:14.324631 3738115 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 10:10:14.324705 3738115 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:10:14.328143 3738115 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:10:14.328386 3738115 start.go:159] libmachine.API.Create for "force-systemd-flag-027208" (driver="docker")
	I1227 10:10:14.328425 3738115 client.go:173] LocalClient.Create starting
	I1227 10:10:14.328500 3738115 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem
	I1227 10:10:14.328539 3738115 main.go:144] libmachine: Decoding PEM data...
	I1227 10:10:14.328557 3738115 main.go:144] libmachine: Parsing certificate...
	I1227 10:10:14.328611 3738115 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem
	I1227 10:10:14.328633 3738115 main.go:144] libmachine: Decoding PEM data...
	I1227 10:10:14.328646 3738115 main.go:144] libmachine: Parsing certificate...
	I1227 10:10:14.329018 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:10:14.345559 3738115 cli_runner.go:211] docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:10:14.345658 3738115 network_create.go:284] running [docker network inspect force-systemd-flag-027208] to gather additional debugging logs...
	I1227 10:10:14.345680 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208
	W1227 10:10:14.361855 3738115 cli_runner.go:211] docker network inspect force-systemd-flag-027208 returned with exit code 1
	I1227 10:10:14.361884 3738115 network_create.go:287] error running [docker network inspect force-systemd-flag-027208]: docker network inspect force-systemd-flag-027208: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-027208 not found
	I1227 10:10:14.361897 3738115 network_create.go:289] output of [docker network inspect force-systemd-flag-027208]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-027208 not found
	
	** /stderr **
	I1227 10:10:14.362011 3738115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:10:14.379980 3738115 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d8712ba8a9f7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9e:f2:5a:61:6a:4e} reservation:<nil>}
	I1227 10:10:14.380333 3738115 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-43ae11d059eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:6d:b0:96:78:2a} reservation:<nil>}
	I1227 10:10:14.380708 3738115 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c4bd1426b4b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:5d:63:1e:36:ed} reservation:<nil>}
	I1227 10:10:14.380950 3738115 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a07a37a22614 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:04:fd:b9:e2:9a} reservation:<nil>}
	I1227 10:10:14.381366 3738115 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d1ce0}
	I1227 10:10:14.381389 3738115 network_create.go:124] attempt to create docker network force-systemd-flag-027208 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:10:14.381445 3738115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-027208 force-systemd-flag-027208
	I1227 10:10:14.441506 3738115 network_create.go:108] docker network force-systemd-flag-027208 192.168.85.0/24 created
	I1227 10:10:14.441539 3738115 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-027208" container
	I1227 10:10:14.441612 3738115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:10:14.457713 3738115 cli_runner.go:164] Run: docker volume create force-systemd-flag-027208 --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:10:14.476328 3738115 oci.go:103] Successfully created a docker volume force-systemd-flag-027208
	I1227 10:10:14.476443 3738115 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-027208-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --entrypoint /usr/bin/test -v force-systemd-flag-027208:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:10:15.042844 3738115 oci.go:107] Successfully prepared a docker volume force-systemd-flag-027208
	I1227 10:10:15.042916 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:10:15.042928 3738115 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:10:15.043044 3738115 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-027208:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:10:18.934663 3738115 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-027208:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.891575702s)
	I1227 10:10:18.934700 3738115 kic.go:203] duration metric: took 3.891766533s to extract preloaded images to volume ...
	W1227 10:10:18.934838 3738115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:10:18.934972 3738115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:10:18.984807 3738115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-027208 --name force-systemd-flag-027208 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-027208 --network force-systemd-flag-027208 --ip 192.168.85.2 --volume force-systemd-flag-027208:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:10:19.288318 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Running}}
	I1227 10:10:19.312460 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
	I1227 10:10:19.332923 3738115 cli_runner.go:164] Run: docker exec force-systemd-flag-027208 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:10:19.398079 3738115 oci.go:144] the created container "force-systemd-flag-027208" has a running status.
	I1227 10:10:19.398134 3738115 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa...
	I1227 10:10:19.979164 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 10:10:19.979299 3738115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:10:19.999194 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
	I1227 10:10:20.030475 3738115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:10:20.030501 3738115 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-027208 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:10:20.074535 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
	I1227 10:10:20.093820 3738115 machine.go:94] provisionDockerMachine start ...
	I1227 10:10:20.093949 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:20.121792 3738115 main.go:144] libmachine: Using SSH client type: native
	I1227 10:10:20.122155 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36225 <nil> <nil>}
	I1227 10:10:20.122171 3738115 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:10:20.122773 3738115 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51694->127.0.0.1:36225: read: connection reset by peer
	I1227 10:10:23.267068 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-027208
	
	I1227 10:10:23.267094 3738115 ubuntu.go:182] provisioning hostname "force-systemd-flag-027208"
	I1227 10:10:23.267161 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:23.286197 3738115 main.go:144] libmachine: Using SSH client type: native
	I1227 10:10:23.286515 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36225 <nil> <nil>}
	I1227 10:10:23.286534 3738115 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-027208 && echo "force-systemd-flag-027208" | sudo tee /etc/hostname
	I1227 10:10:23.437194 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-027208
	
	I1227 10:10:23.437279 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:23.456503 3738115 main.go:144] libmachine: Using SSH client type: native
	I1227 10:10:23.456885 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36225 <nil> <nil>}
	I1227 10:10:23.456913 3738115 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-027208' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-027208/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-027208' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:10:23.595282 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:10:23.595307 3738115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-3531265/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-3531265/.minikube}
	I1227 10:10:23.595327 3738115 ubuntu.go:190] setting up certificates
	I1227 10:10:23.595336 3738115 provision.go:84] configureAuth start
	I1227 10:10:23.595398 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
	I1227 10:10:23.612849 3738115 provision.go:143] copyHostCerts
	I1227 10:10:23.612896 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
	I1227 10:10:23.612928 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem, removing ...
	I1227 10:10:23.612938 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
	I1227 10:10:23.613020 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem (1082 bytes)
	I1227 10:10:23.613112 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
	I1227 10:10:23.613137 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem, removing ...
	I1227 10:10:23.613147 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
	I1227 10:10:23.613184 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem (1123 bytes)
	I1227 10:10:23.613236 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
	I1227 10:10:23.613270 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem, removing ...
	I1227 10:10:23.613277 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
	I1227 10:10:23.613304 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem (1675 bytes)
	I1227 10:10:23.613366 3738115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-027208 san=[127.0.0.1 192.168.85.2 force-systemd-flag-027208 localhost minikube]
	I1227 10:10:24.133708 3738115 provision.go:177] copyRemoteCerts
	I1227 10:10:24.133787 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:10:24.133831 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.151314 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.250894 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 10:10:24.250995 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:10:24.269969 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 10:10:24.270032 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 10:10:24.289161 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 10:10:24.289239 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:10:24.306849 3738115 provision.go:87] duration metric: took 711.49982ms to configureAuth
	I1227 10:10:24.306875 3738115 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:10:24.307072 3738115 config.go:182] Loaded profile config "force-systemd-flag-027208": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:10:24.307083 3738115 machine.go:97] duration metric: took 4.213237619s to provisionDockerMachine
	I1227 10:10:24.307090 3738115 client.go:176] duration metric: took 9.978658918s to LocalClient.Create
	I1227 10:10:24.307107 3738115 start.go:167] duration metric: took 9.978722333s to libmachine.API.Create "force-systemd-flag-027208"
	I1227 10:10:24.307114 3738115 start.go:293] postStartSetup for "force-systemd-flag-027208" (driver="docker")
	I1227 10:10:24.307122 3738115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:10:24.307178 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:10:24.307230 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.324192 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.423140 3738115 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:10:24.426587 3738115 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:10:24.426659 3738115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:10:24.426678 3738115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/addons for local assets ...
	I1227 10:10:24.426739 3738115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/files for local assets ...
	I1227 10:10:24.426819 3738115 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> 35331472.pem in /etc/ssl/certs
	I1227 10:10:24.426834 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> /etc/ssl/certs/35331472.pem
	I1227 10:10:24.426951 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:10:24.434338 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /etc/ssl/certs/35331472.pem (1708 bytes)
	I1227 10:10:24.452382 3738115 start.go:296] duration metric: took 145.254802ms for postStartSetup
	I1227 10:10:24.452762 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
	I1227 10:10:24.469668 3738115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json ...
	I1227 10:10:24.469957 3738115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:10:24.470000 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.486890 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.584309 3738115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:10:24.589316 3738115 start.go:128] duration metric: took 10.264593752s to createHost
	I1227 10:10:24.589389 3738115 start.go:83] releasing machines lock for "force-systemd-flag-027208", held for 10.264769864s
	I1227 10:10:24.589479 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
	I1227 10:10:24.607151 3738115 ssh_runner.go:195] Run: cat /version.json
	I1227 10:10:24.607216 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.607537 3738115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:10:24.607594 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.647065 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.656060 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.852332 3738115 ssh_runner.go:195] Run: systemctl --version
	I1227 10:10:24.859289 3738115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:10:24.863820 3738115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:10:24.863935 3738115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:10:24.894008 3738115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:10:24.894085 3738115 start.go:496] detecting cgroup driver to use...
	I1227 10:10:24.894113 3738115 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 10:10:24.894199 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 10:10:24.909955 3738115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 10:10:24.924610 3738115 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:10:24.924679 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:10:24.943027 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:10:24.962924 3738115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:10:25.086519 3738115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:10:25.217234 3738115 docker.go:234] disabling docker service ...
	I1227 10:10:25.217301 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:10:25.239443 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:10:25.253469 3738115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:10:25.372805 3738115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:10:25.502827 3738115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:10:25.516102 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:10:25.530490 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 10:10:25.539633 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 10:10:25.548981 3738115 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 10:10:25.549107 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 10:10:25.558292 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 10:10:25.567719 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 10:10:25.576955 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 10:10:25.586514 3738115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:10:25.594864 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 10:10:25.604220 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 10:10:25.613067 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 10:10:25.621797 3738115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:10:25.629270 3738115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:10:25.637053 3738115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:10:25.760495 3738115 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 10:10:25.897831 3738115 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 10:10:25.897957 3738115 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 10:10:25.901900 3738115 start.go:574] Will wait 60s for crictl version
	I1227 10:10:25.902037 3738115 ssh_runner.go:195] Run: which crictl
	I1227 10:10:25.905697 3738115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:10:25.930207 3738115 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 10:10:25.930328 3738115 ssh_runner.go:195] Run: containerd --version
	I1227 10:10:25.954007 3738115 ssh_runner.go:195] Run: containerd --version
	I1227 10:10:25.981733 3738115 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 10:10:25.984781 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:10:26.000934 3738115 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:10:26.006285 3738115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:10:26.018144 3738115 kubeadm.go:884] updating cluster {Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:10:26.018261 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:10:26.018337 3738115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:10:26.050904 3738115 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 10:10:26.050932 3738115 containerd.go:542] Images already preloaded, skipping extraction
	I1227 10:10:26.051019 3738115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:10:26.077679 3738115 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 10:10:26.077700 3738115 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:10:26.077708 3738115 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1227 10:10:26.077812 3738115 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-027208 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:10:26.077878 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 10:10:26.103476 3738115 cni.go:84] Creating CNI manager for ""
	I1227 10:10:26.103506 3738115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:10:26.103527 3738115 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:10:26.103551 3738115 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-027208 NodeName:force-systemd-flag-027208 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:10:26.103669 3738115 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-027208"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:10:26.103747 3738115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:10:26.115900 3738115 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:10:26.115969 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:10:26.124889 3738115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1227 10:10:26.139449 3738115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:10:26.154050 3738115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1227 10:10:26.169297 3738115 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:10:26.173915 3738115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:10:26.184920 3738115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:10:26.302987 3738115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:10:26.319342 3738115 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208 for IP: 192.168.85.2
	I1227 10:10:26.319367 3738115 certs.go:195] generating shared ca certs ...
	I1227 10:10:26.319382 3738115 certs.go:227] acquiring lock for ca certs: {Name:mk8b517b50583c7fd9315f1419472c192d2e7a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.319519 3738115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key
	I1227 10:10:26.319566 3738115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key
	I1227 10:10:26.319577 3738115 certs.go:257] generating profile certs ...
	I1227 10:10:26.319635 3738115 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key
	I1227 10:10:26.319659 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt with IP's: []
	I1227 10:10:26.459451 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt ...
	I1227 10:10:26.459481 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt: {Name:mk84501b4c3d27859a09c7a6cf2970a871461396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.459678 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key ...
	I1227 10:10:26.459696 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key: {Name:mk2ccf9cd6593ffe591c5f10566441231d2db314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.459797 3738115 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b
	I1227 10:10:26.459816 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 10:10:26.619632 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b ...
	I1227 10:10:26.619671 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b: {Name:mk45edfe96d665c299603d64f2aab60b1ce255c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.619859 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b ...
	I1227 10:10:26.619874 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b: {Name:mkbd7ed3b29ae956b5f18bf81df861e3ebc9c0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.619963 3738115 certs.go:382] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt
	I1227 10:10:26.620069 3738115 certs.go:386] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key
	I1227 10:10:26.620138 3738115 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key
	I1227 10:10:26.620158 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt with IP's: []
	I1227 10:10:27.146672 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt ...
	I1227 10:10:27.146707 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt: {Name:mkb638601bcc294803da88d5fdf89e5d664c6575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:27.146874 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key ...
	I1227 10:10:27.146889 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key: {Name:mk1275117485033a42422350e6b97f277389ec3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:27.146996 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 10:10:27.147022 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 10:10:27.147035 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 10:10:27.147053 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 10:10:27.147065 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 10:10:27.147081 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 10:10:27.147094 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 10:10:27.147106 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 10:10:27.147167 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem (1338 bytes)
	W1227 10:10:27.147209 3738115 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147_empty.pem, impossibly tiny 0 bytes
	I1227 10:10:27.147220 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:10:27.147257 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:10:27.147286 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:10:27.147309 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem (1675 bytes)
	I1227 10:10:27.147356 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem (1708 bytes)
	I1227 10:10:27.147392 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.147415 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem -> /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.147433 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.147968 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:10:27.172281 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:10:27.199732 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:10:27.218091 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:10:27.236726 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 10:10:27.255815 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:10:27.273210 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:10:27.291337 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:10:27.309854 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:10:27.327812 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem --> /usr/share/ca-certificates/3533147.pem (1338 bytes)
	I1227 10:10:27.345068 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /usr/share/ca-certificates/35331472.pem (1708 bytes)
	I1227 10:10:27.363093 3738115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:10:27.376243 3738115 ssh_runner.go:195] Run: openssl version
	I1227 10:10:27.382542 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.390045 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:10:27.397770 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.401584 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:25 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.401753 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.442821 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:10:27.450555 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:10:27.458392 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.465875 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3533147.pem /etc/ssl/certs/3533147.pem
	I1227 10:10:27.473778 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.477818 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:31 /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.477901 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.521479 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:10:27.529246 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3533147.pem /etc/ssl/certs/51391683.0
	I1227 10:10:27.537210 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.545185 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/35331472.pem /etc/ssl/certs/35331472.pem
	I1227 10:10:27.553062 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.557000 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:31 /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.557069 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.598533 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:10:27.606100 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/35331472.pem /etc/ssl/certs/3ec20f2e.0
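	Note: the opaque hash.0 names above follow OpenSSL's subject-hash convention for CA directories: "openssl x509 -hash" prints the hash under which TLS libraries look the CA up, and the symlink makes it resolvable. Reproducing the first link from this run, as a sketch:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0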
	I1227 10:10:27.614561 3738115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:10:27.619167 3738115 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:10:27.619261 3738115 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:10:27.619373 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 10:10:27.619454 3738115 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:10:27.664708 3738115 cri.go:96] found id: ""
	I1227 10:10:27.664808 3738115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:10:27.676293 3738115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:10:27.684601 3738115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:10:27.684711 3738115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:10:27.693229 3738115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:10:27.693252 3738115 kubeadm.go:158] found existing configuration files:
	
	I1227 10:10:27.693326 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:10:27.701375 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:10:27.701465 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:10:27.709152 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:10:27.717622 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:10:27.717691 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:10:27.725649 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:10:27.733875 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:10:27.733981 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:10:27.741583 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:10:27.749413 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:10:27.749491 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:10:27.757332 3738115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:10:27.795629 3738115 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:10:27.795779 3738115 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:10:27.898250 3738115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:10:27.898344 3738115 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:10:27.898392 3738115 kubeadm.go:319] OS: Linux
	I1227 10:10:27.898440 3738115 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:10:27.898492 3738115 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:10:27.898543 3738115 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:10:27.898594 3738115 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:10:27.898647 3738115 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:10:27.898703 3738115 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:10:27.898753 3738115 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:10:27.898801 3738115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:10:27.898850 3738115 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:10:27.969995 3738115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:10:27.970212 3738115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:10:27.970343 3738115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:10:27.975838 3738115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:10:27.982447 3738115 out.go:252]   - Generating certificates and keys ...
	I1227 10:10:27.982636 3738115 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:10:27.982764 3738115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:10:28.179272 3738115 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:10:28.301146 3738115 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:10:28.409704 3738115 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:10:28.575840 3738115 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:10:28.653265 3738115 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:10:28.653619 3738115 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:10:29.172495 3738115 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:10:29.173136 3738115 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:10:29.225627 3738115 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:10:29.920042 3738115 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:10:30.152507 3738115 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:10:30.153337 3738115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:10:30.333897 3738115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:10:30.680029 3738115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:10:30.828481 3738115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:10:30.943020 3738115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:10:31.110010 3738115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:10:31.110883 3738115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:10:31.114899 3738115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:10:31.121179 3738115 out.go:252]   - Booting up control plane ...
	I1227 10:10:31.121296 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:10:31.121382 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:10:31.121448 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:10:31.138571 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:10:31.139005 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:10:31.146921 3738115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:10:31.147313 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:10:31.147361 3738115 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:10:31.282879 3738115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:10:31.283057 3738115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:14:31.283323 3738115 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000806423s
	I1227 10:14:31.283368 3738115 kubeadm.go:319] 
	I1227 10:14:31.283433 3738115 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:14:31.283471 3738115 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:14:31.283588 3738115 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:14:31.283597 3738115 kubeadm.go:319] 
	I1227 10:14:31.283713 3738115 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:14:31.283748 3738115 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:14:31.283785 3738115 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:14:31.283793 3738115 kubeadm.go:319] 
	I1227 10:14:31.288767 3738115 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:14:31.289201 3738115 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:14:31.289312 3738115 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:14:31.289547 3738115 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:14:31.289553 3738115 kubeadm.go:319] 
	I1227 10:14:31.289621 3738115 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:14:31.289742 3738115 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000806423s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
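	Note: the troubleshooting commands suggested above must run inside the node container, not on the CI host, since the Docker driver puts the kubelet in a container. A sketch using this run's profile name:
	
	    docker exec force-systemd-flag-027208 systemctl status kubelet --no-pager
	    docker exec force-systemd-flag-027208 journalctl -u kubelet --no-pager | tail -n 50
	
	Given the cgroups v1 deprecation warning above (FailCgroupV1 must be set to false for kubelet v1.35+ on a v1 host), the kubelet refusing to start on this cgroups v1 kernel is a plausible reading of the 4m0s health-check timeout.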
	
	I1227 10:14:31.289819 3738115 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1227 10:14:31.750300 3738115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:14:31.772656 3738115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:14:31.772727 3738115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:14:31.782758 3738115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:14:31.782780 3738115 kubeadm.go:158] found existing configuration files:
	
	I1227 10:14:31.782857 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:14:31.796759 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:14:31.796822 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:14:31.811896 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:14:31.822736 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:14:31.822833 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:14:31.836527 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:14:31.851023 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:14:31.851095 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:14:31.869098 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:14:31.878981 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:14:31.879053 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:14:31.887412 3738115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:14:31.953162 3738115 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:14:31.953584 3738115 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:14:32.062092 3738115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:14:32.062172 3738115 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:14:32.062210 3738115 kubeadm.go:319] OS: Linux
	I1227 10:14:32.062262 3738115 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:14:32.062316 3738115 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:14:32.062368 3738115 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:14:32.062420 3738115 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:14:32.062472 3738115 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:14:32.062524 3738115 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:14:32.062577 3738115 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:14:32.062630 3738115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:14:32.062681 3738115 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:14:32.194241 3738115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:14:32.194360 3738115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:14:32.194458 3738115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:14:32.211795 3738115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:14:32.217418 3738115 out.go:252]   - Generating certificates and keys ...
	I1227 10:14:32.217527 3738115 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:14:32.217602 3738115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:14:32.217685 3738115 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:14:32.217751 3738115 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:14:32.217828 3738115 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:14:32.217893 3738115 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:14:32.217964 3738115 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:14:32.218035 3738115 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:14:32.218115 3738115 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:14:32.218192 3738115 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:14:32.218234 3738115 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:14:32.218297 3738115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:14:32.311471 3738115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:14:32.630411 3738115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:14:32.960523 3738115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:14:33.272670 3738115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:14:33.470189 3738115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:14:33.471343 3738115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:14:33.474434 3738115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:14:33.476605 3738115 out.go:252]   - Booting up control plane ...
	I1227 10:14:33.476727 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:14:33.478244 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:14:33.479797 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:14:33.509188 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:14:33.510062 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:14:33.526391 3738115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:14:33.526791 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:14:33.531788 3738115 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:14:33.783393 3738115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:14:33.783515 3738115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:18:33.783343 3738115 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000245188s
	I1227 10:18:33.783607 3738115 kubeadm.go:319] 
	I1227 10:18:33.783674 3738115 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:18:33.783709 3738115 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:18:33.783814 3738115 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:18:33.783820 3738115 kubeadm.go:319] 
	I1227 10:18:33.783924 3738115 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:18:33.783956 3738115 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:18:33.783987 3738115 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:18:33.783992 3738115 kubeadm.go:319] 
	I1227 10:18:33.788224 3738115 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:18:33.788670 3738115 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:18:33.788796 3738115 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:18:33.789043 3738115 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:18:33.789054 3738115 kubeadm.go:319] 
	I1227 10:18:33.789122 3738115 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:18:33.789185 3738115 kubeadm.go:403] duration metric: took 8m6.169929895s to StartCluster
	I1227 10:18:33.789236 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:18:33.789303 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:18:33.814197 3738115 cri.go:96] found id: ""
	I1227 10:18:33.814236 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.814245 3738115 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:18:33.814252 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 10:18:33.814314 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:18:33.839019 3738115 cri.go:96] found id: ""
	I1227 10:18:33.839043 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.839051 3738115 logs.go:284] No container was found matching "etcd"
	I1227 10:18:33.839058 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 10:18:33.839114 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:18:33.876385 3738115 cri.go:96] found id: ""
	I1227 10:18:33.876414 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.876427 3738115 logs.go:284] No container was found matching "coredns"
	I1227 10:18:33.876433 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:18:33.876491 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:18:33.906761 3738115 cri.go:96] found id: ""
	I1227 10:18:33.906788 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.906797 3738115 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:18:33.906803 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:18:33.906864 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:18:33.935959 3738115 cri.go:96] found id: ""
	I1227 10:18:33.935985 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.935994 3738115 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:18:33.936000 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:18:33.936056 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:18:33.960107 3738115 cri.go:96] found id: ""
	I1227 10:18:33.960131 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.960143 3738115 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:18:33.960149 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 10:18:33.960236 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:18:33.989273 3738115 cri.go:96] found id: ""
	I1227 10:18:33.989300 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.989310 3738115 logs.go:284] No container was found matching "kindnet"
	I1227 10:18:33.989356 3738115 logs.go:123] Gathering logs for containerd ...
	I1227 10:18:33.989378 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 10:18:34.028316 3738115 logs.go:123] Gathering logs for container status ...
	I1227 10:18:34.028366 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 10:18:34.063676 3738115 logs.go:123] Gathering logs for kubelet ...
	I1227 10:18:34.063759 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:18:34.124368 3738115 logs.go:123] Gathering logs for dmesg ...
	I1227 10:18:34.124411 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:18:34.139149 3738115 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:18:34.139179 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:18:34.206064 3738115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:18:34.197603    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.198405    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.199906    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.200420    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.202145    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:18:34.197603    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.198405    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.199906    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.200420    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.202145    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
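The repeated connection-refused errors above mean the API server on localhost:8443 never came up, which is consistent with the kubelet failure reported below rather than a networking problem. One quick way to confirm that no control-plane containers were ever started (a diagnostic sketch only, reusing the profile name and the crictl invocation already used by the log-gathering steps above):

	out/minikube-linux-arm64 -p force-systemd-flag-027208 ssh "sudo crictl ps -a"

If kube-apiserver and etcd are absent from that listing, the kubelet never launched the static pods, and the kubelet journal is the right place to look.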
	W1227 10:18:34.206090 3738115 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:18:34.206211 3738115 out.go:285] * 
	W1227 10:18:34.206276 3738115 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:18:34.206296 3738115 out.go:285] * 
	W1227 10:18:34.206569 3738115 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:18:34.211278 3738115 out.go:203] 
	W1227 10:18:34.214075 3738115 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:18:34.214127 3738115 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:18:34.214153 3738115 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:18:34.217184 3738115 out.go:203] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-027208 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
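The suggestion emitted at the end of the failed run points at a cgroup-driver mismatch: --force-systemd switches containerd to the systemd cgroup driver, so the kubelet must be configured to match. A sketch of a retry with the driver pinned explicitly, per the log's own suggestion (same profile and flags as the failed invocation; not verified against this run):

	out/minikube-linux-arm64 start -p force-systemd-flag-027208 \
	  --memory=3072 --force-systemd --alsologtostderr -v=5 \
	  --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd

Note also the SystemVerification warning above: on a cgroups v1 host, kubelet v1.35 or newer additionally requires the kubelet configuration option 'FailCgroupV1' to be set to 'false' before cgroups v1 support is enabled.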
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-027208 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 10:18:34.567922749 +0000 UTC m=+3202.359074628
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-027208
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-027208:

-- stdout --
	[
	    {
	        "Id": "e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b",
	        "Created": "2025-12-27T10:10:18.999708699Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3738542,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:10:19.066236146Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b/hosts",
	        "LogPath": "/var/lib/docker/containers/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b/e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b-json.log",
	        "Name": "/force-systemd-flag-027208",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-027208:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-027208",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e0e73dc04b0aab4fcdaf361898cc0383e1c5547634bc16b5ba684ed19c69705b",
	                "LowerDir": "/var/lib/docker/overlay2/487251bacee39f8118a04e5796b9b85b7cd708351cfbf4db499ea57c5de16418-init/diff:/var/lib/docker/overlay2/2db3190b649abc62a8f6b3256c95cbe4767892923c34d4bdea0f0debaf7248d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/487251bacee39f8118a04e5796b9b85b7cd708351cfbf4db499ea57c5de16418/merged",
	                "UpperDir": "/var/lib/docker/overlay2/487251bacee39f8118a04e5796b9b85b7cd708351cfbf4db499ea57c5de16418/diff",
	                "WorkDir": "/var/lib/docker/overlay2/487251bacee39f8118a04e5796b9b85b7cd708351cfbf4db499ea57c5de16418/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-027208",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-027208/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-027208",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-027208",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-027208",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd507a1d5818f6a99652133c3576c3adb743bed95d49e7435c3a3d4c86b89892",
	            "SandboxKey": "/var/run/docker/netns/dd507a1d5818",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36225"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36226"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36229"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36227"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36228"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-027208": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:10:2c:cc:6e:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c217f0350b8b4e4e3b94001dd4b74a8853abe60a63cc91a348daffa0221690e1",
	                    "EndpointID": "95337a0b7d2da67cb6b1113e7d25e2701a2473a726ceb586fd39a62636c2c6f1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-027208",
	                        "e0e73dc04b0a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-027208 -n force-systemd-flag-027208
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-027208 -n force-systemd-flag-027208: exit status 6 (327.260865ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1227 10:18:34.897989 3767160 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-027208" does not appear in /home/jenkins/minikube-integration/22343-3531265/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
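The exit-status-6 here is a downstream effect of the failed start: because kubeadm never completed, the profile was not written to the kubeconfig, so status cannot resolve an endpoint (the error above). When the container is otherwise healthy, the context can usually be repaired with the command the warning itself recommends (a sketch, assuming the profile still exists):

	out/minikube-linux-arm64 -p force-systemd-flag-027208 update-context

In this run the cluster never bootstrapped, so the stale-context warning is expected fallout rather than an independent failure.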
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-027208 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-838902                                                                                                                                                                                                                              │ cert-options-838902       │ jenkins │ v1.37.0 │ 27 Dec 25 10:12 UTC │ 27 Dec 25 10:12 UTC │
	│ start   │ -p old-k8s-version-429745 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:12 UTC │ 27 Dec 25 10:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-429745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:14 UTC │ 27 Dec 25 10:14 UTC │
	│ stop    │ -p old-k8s-version-429745 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:14 UTC │ 27 Dec 25 10:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-429745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:14 UTC │ 27 Dec 25 10:14 UTC │
	│ start   │ -p old-k8s-version-429745 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:14 UTC │ 27 Dec 25 10:14 UTC │
	│ image   │ old-k8s-version-429745 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
	│ pause   │ -p old-k8s-version-429745 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
	│ unpause │ -p old-k8s-version-429745 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
	│ delete  │ -p old-k8s-version-429745                                                                                                                                                                                                                           │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
	│ delete  │ -p old-k8s-version-429745                                                                                                                                                                                                                           │ old-k8s-version-429745    │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
	│ start   │ -p no-preload-878202 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:15 UTC │ 27 Dec 25 10:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-878202 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
	│ stop    │ -p no-preload-878202 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
	│ addons  │ enable dashboard -p no-preload-878202 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:16 UTC │
	│ start   │ -p no-preload-878202 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:16 UTC │ 27 Dec 25 10:17 UTC │
	│ image   │ no-preload-878202 image list --format=json                                                                                                                                                                                                          │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
	│ pause   │ -p no-preload-878202 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
	│ unpause │ -p no-preload-878202 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
	│ delete  │ -p no-preload-878202                                                                                                                                                                                                                                │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
	│ delete  │ -p no-preload-878202                                                                                                                                                                                                                                │ no-preload-878202         │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:17 UTC │
	│ start   │ -p embed-certs-161350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-161350        │ jenkins │ v1.37.0 │ 27 Dec 25 10:17 UTC │ 27 Dec 25 10:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-161350 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-161350        │ jenkins │ v1.37.0 │ 27 Dec 25 10:18 UTC │ 27 Dec 25 10:18 UTC │
	│ stop    │ -p embed-certs-161350 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-161350        │ jenkins │ v1.37.0 │ 27 Dec 25 10:18 UTC │                     │
	│ ssh     │ force-systemd-flag-027208 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-027208 │ jenkins │ v1.37.0 │ 27 Dec 25 10:18 UTC │ 27 Dec 25 10:18 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:17:30
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:17:30.347749 3763056 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:17:30.347947 3763056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:17:30.347979 3763056 out.go:374] Setting ErrFile to fd 2...
	I1227 10:17:30.348002 3763056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:17:30.348425 3763056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 10:17:30.349032 3763056 out.go:368] Setting JSON to false
	I1227 10:17:30.349964 3763056 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":57603,"bootTime":1766773048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 10:17:30.350104 3763056 start.go:143] virtualization:  
	I1227 10:17:30.354253 3763056 out.go:179] * [embed-certs-161350] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:17:30.358682 3763056 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:17:30.358815 3763056 notify.go:221] Checking for updates...
	I1227 10:17:30.365184 3763056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:17:30.368285 3763056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 10:17:30.371384 3763056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 10:17:30.374544 3763056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:17:30.377586 3763056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:17:30.381113 3763056 config.go:182] Loaded profile config "force-systemd-flag-027208": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:17:30.381223 3763056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:17:30.413273 3763056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:17:30.413402 3763056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:17:30.473821 3763056 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:17:30.464408377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:17:30.473925 3763056 docker.go:319] overlay module found
	I1227 10:17:30.477152 3763056 out.go:179] * Using the docker driver based on user configuration
	I1227 10:17:30.480028 3763056 start.go:309] selected driver: docker
	I1227 10:17:30.480063 3763056 start.go:928] validating driver "docker" against <nil>
	I1227 10:17:30.480078 3763056 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:17:30.480833 3763056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:17:30.539494 3763056 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:17:30.530333266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:17:30.539647 3763056 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:17:30.539874 3763056 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:17:30.543026 3763056 out.go:179] * Using Docker driver with root privileges
	I1227 10:17:30.546000 3763056 cni.go:84] Creating CNI manager for ""
	I1227 10:17:30.546080 3763056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:17:30.546095 3763056 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:17:30.546164 3763056 start.go:353] cluster config:
	{Name:embed-certs-161350 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:17:30.549368 3763056 out.go:179] * Starting "embed-certs-161350" primary control-plane node in "embed-certs-161350" cluster
	I1227 10:17:30.552328 3763056 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 10:17:30.555306 3763056 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:17:30.558166 3763056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:17:30.558219 3763056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 10:17:30.558234 3763056 cache.go:65] Caching tarball of preloaded images
	I1227 10:17:30.558239 3763056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:17:30.558317 3763056 preload.go:251] Found /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 10:17:30.558327 3763056 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 10:17:30.558444 3763056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/config.json ...
	I1227 10:17:30.558461 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/config.json: {Name:mkeb2d24ed7cd78ac4b9966b3f4e0b1888680eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:17:30.580454 3763056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:17:30.580480 3763056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:17:30.580496 3763056 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:17:30.580529 3763056 start.go:360] acquireMachinesLock for embed-certs-161350: {Name:mk5eca3f0e9c960c00971a61d3c4e9d0151a24a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:17:30.580648 3763056 start.go:364] duration metric: took 99.739µs to acquireMachinesLock for "embed-certs-161350"
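The acquireMachinesLock lines above show machine creation being serialized behind a named lock whose retry parameters are printed in the log: Delay:500ms, Timeout:10m0s, i.e. re-try every 500ms until the lock is held or a ten-minute deadline expires. A minimal Go sketch of that retry pattern, built on a hypothetical lock-file helper (illustrative only, not minikube's actual lock implementation):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock takes an exclusive lock by creating a lock file; O_EXCL makes
// the create fail if another process already holds the file.
// (A stand-in for the real lock; purely illustrative.)
func tryLock(path string) (release func(), err error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
	if err != nil {
		return nil, err
	}
	f.Close()
	return func() { os.Remove(path) }, nil
}

// acquireWithRetry polls tryLock every delay until timeout, mirroring the
// {Delay:500ms Timeout:10m0s} parameters visible in the log line above.
func acquireWithRetry(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		if release, err := tryLock(path); err == nil {
			return release, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring lock " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquireWithRetry("/tmp/machines-demo.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
}

The 99.739µs acquisition reported in the log simply means no other process held the lock at that moment.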
	I1227 10:17:30.580680 3763056 start.go:93] Provisioning new machine with config: &{Name:embed-certs-161350 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 10:17:30.580759 3763056 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:17:30.584200 3763056 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:17:30.584464 3763056 start.go:159] libmachine.API.Create for "embed-certs-161350" (driver="docker")
	I1227 10:17:30.584504 3763056 client.go:173] LocalClient.Create starting
	I1227 10:17:30.584581 3763056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem
	I1227 10:17:30.584626 3763056 main.go:144] libmachine: Decoding PEM data...
	I1227 10:17:30.584644 3763056 main.go:144] libmachine: Parsing certificate...
	I1227 10:17:30.584700 3763056 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem
	I1227 10:17:30.584721 3763056 main.go:144] libmachine: Decoding PEM data...
	I1227 10:17:30.584732 3763056 main.go:144] libmachine: Parsing certificate...
	I1227 10:17:30.585149 3763056 cli_runner.go:164] Run: docker network inspect embed-certs-161350 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:17:30.601724 3763056 cli_runner.go:211] docker network inspect embed-certs-161350 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:17:30.601827 3763056 network_create.go:284] running [docker network inspect embed-certs-161350] to gather additional debugging logs...
	I1227 10:17:30.601849 3763056 cli_runner.go:164] Run: docker network inspect embed-certs-161350
	W1227 10:17:30.620675 3763056 cli_runner.go:211] docker network inspect embed-certs-161350 returned with exit code 1
	I1227 10:17:30.620711 3763056 network_create.go:287] error running [docker network inspect embed-certs-161350]: docker network inspect embed-certs-161350: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-161350 not found
	I1227 10:17:30.620725 3763056 network_create.go:289] output of [docker network inspect embed-certs-161350]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-161350 not found
	
	** /stderr **
	I1227 10:17:30.620831 3763056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:17:30.639239 3763056 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d8712ba8a9f7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9e:f2:5a:61:6a:4e} reservation:<nil>}
	I1227 10:17:30.639604 3763056 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-43ae11d059eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:6d:b0:96:78:2a} reservation:<nil>}
	I1227 10:17:30.639941 3763056 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c4bd1426b4b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:5d:63:1e:36:ed} reservation:<nil>}
	I1227 10:17:30.640396 3763056 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a0d30}
	I1227 10:17:30.640420 3763056 network_create.go:124] attempt to create docker network embed-certs-161350 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:17:30.640476 3763056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-161350 embed-certs-161350
	I1227 10:17:30.708947 3763056 network_create.go:108] docker network embed-certs-161350 192.168.76.0/24 created
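The "skipping subnet ... that is taken" lines above are minikube's free-subnet scan: candidate private /24 blocks are probed in order, rejected when they already back a host bridge interface, and the first free block (here 192.168.76.0/24) wins. A rough Go sketch of that idea, using a simplified isTaken check against local interface addresses (the real network.go logic also tracks reservations and gateway details):

package main

import (
	"fmt"
	"net"
)

// isTaken reports whether any local interface address falls inside the
// candidate subnet, which is how an in-use docker bridge shows up.
// Simplified stand-in for minikube's check; illustrative only.
func isTaken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // conservative: treat unknown state as taken
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate blocks in the order the log shows them being probed.
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			continue
		}
		if isTaken(subnet) {
			fmt.Printf("skipping subnet %s that is taken\n", c)
			continue
		}
		fmt.Printf("using free private subnet %s\n", c)
		break
	}
}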
	I1227 10:17:30.708975 3763056 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-161350" container
	I1227 10:17:30.709050 3763056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:17:30.725619 3763056 cli_runner.go:164] Run: docker volume create embed-certs-161350 --label name.minikube.sigs.k8s.io=embed-certs-161350 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:17:30.744664 3763056 oci.go:103] Successfully created a docker volume embed-certs-161350
	I1227 10:17:30.744768 3763056 cli_runner.go:164] Run: docker run --rm --name embed-certs-161350-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-161350 --entrypoint /usr/bin/test -v embed-certs-161350:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:17:31.298923 3763056 oci.go:107] Successfully prepared a docker volume embed-certs-161350
	I1227 10:17:31.299017 3763056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:17:31.299028 3763056 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:17:31.299102 3763056 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-161350:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:17:35.182598 3763056 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-161350:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.883439274s)
	I1227 10:17:35.182634 3763056 kic.go:203] duration metric: took 3.883603076s to extract preloaded images to volume ...
	W1227 10:17:35.182761 3763056 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:17:35.182889 3763056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:17:35.237496 3763056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-161350 --name embed-certs-161350 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-161350 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-161350 --network embed-certs-161350 --ip 192.168.76.2 --volume embed-certs-161350:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:17:35.545813 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Running}}
	I1227 10:17:35.569873 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
	I1227 10:17:35.593009 3763056 cli_runner.go:164] Run: docker exec embed-certs-161350 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:17:35.646142 3763056 oci.go:144] the created container "embed-certs-161350" has a running status.
	I1227 10:17:35.646169 3763056 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa...
	I1227 10:17:35.832430 3763056 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:17:35.861361 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
	I1227 10:17:35.890340 3763056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:17:35.890375 3763056 kic_runner.go:114] Args: [docker exec --privileged embed-certs-161350 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:17:35.952579 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
	I1227 10:17:35.977800 3763056 machine.go:94] provisionDockerMachine start ...
	I1227 10:17:35.977899 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:17:36.006612 3763056 main.go:144] libmachine: Using SSH client type: native
	I1227 10:17:36.007015 3763056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36255 <nil> <nil>}
	I1227 10:17:36.007028 3763056 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:17:36.007909 3763056 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 10:17:39.150696 3763056 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-161350
	
	I1227 10:17:39.150719 3763056 ubuntu.go:182] provisioning hostname "embed-certs-161350"
	I1227 10:17:39.150784 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:17:39.169295 3763056 main.go:144] libmachine: Using SSH client type: native
	I1227 10:17:39.169621 3763056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36255 <nil> <nil>}
	I1227 10:17:39.169637 3763056 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-161350 && echo "embed-certs-161350" | sudo tee /etc/hostname
	I1227 10:17:39.316441 3763056 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-161350
	
	I1227 10:17:39.316528 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:17:39.334187 3763056 main.go:144] libmachine: Using SSH client type: native
	I1227 10:17:39.334505 3763056 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36255 <nil> <nil>}
	I1227 10:17:39.334529 3763056 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-161350' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-161350/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-161350' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:17:39.475324 3763056 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:17:39.475348 3763056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-3531265/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-3531265/.minikube}
	I1227 10:17:39.475367 3763056 ubuntu.go:190] setting up certificates
	I1227 10:17:39.475377 3763056 provision.go:84] configureAuth start
	I1227 10:17:39.475438 3763056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-161350
	I1227 10:17:39.492280 3763056 provision.go:143] copyHostCerts
	I1227 10:17:39.492372 3763056 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem, removing ...
	I1227 10:17:39.492388 3763056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
	I1227 10:17:39.492471 3763056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem (1082 bytes)
	I1227 10:17:39.492566 3763056 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem, removing ...
	I1227 10:17:39.492575 3763056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
	I1227 10:17:39.492602 3763056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem (1123 bytes)
	I1227 10:17:39.492659 3763056 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem, removing ...
	I1227 10:17:39.492669 3763056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
	I1227 10:17:39.492692 3763056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem (1675 bytes)
	I1227 10:17:39.492751 3763056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem org=jenkins.embed-certs-161350 san=[127.0.0.1 192.168.76.2 embed-certs-161350 localhost minikube]
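The "generating server cert" line above lists the SANs baked into the machine's serving certificate: 127.0.0.1, 192.168.76.2, embed-certs-161350, localhost and minikube. A minimal sketch of SAN certificate generation with Go's crypto/x509 (self-signed here for brevity; the log shows minikube signing with the ca.pem/ca-key.pem pair instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-161350"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as they appear in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"embed-certs-161350", "localhost", "minikube"},
	}
	// Self-signed for brevity; pass a CA certificate and key as the
	// parent/signer arguments to reproduce what the provisioner does.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}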
	I1227 10:17:39.611352 3763056 provision.go:177] copyRemoteCerts
	I1227 10:17:39.611420 3763056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:17:39.611463 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:17:39.629949 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
	I1227 10:17:39.735261 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:17:39.754155 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1227 10:17:39.773374 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:17:39.792319 3763056 provision.go:87] duration metric: took 316.908283ms to configureAuth
	I1227 10:17:39.792403 3763056 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:17:39.792651 3763056 config.go:182] Loaded profile config "embed-certs-161350": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:17:39.792665 3763056 machine.go:97] duration metric: took 3.814848167s to provisionDockerMachine
	I1227 10:17:39.792679 3763056 client.go:176] duration metric: took 9.20815998s to LocalClient.Create
	I1227 10:17:39.792702 3763056 start.go:167] duration metric: took 9.208240034s to libmachine.API.Create "embed-certs-161350"
	I1227 10:17:39.792710 3763056 start.go:293] postStartSetup for "embed-certs-161350" (driver="docker")
	I1227 10:17:39.792724 3763056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:17:39.792777 3763056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:17:39.792828 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:17:39.810832 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
	I1227 10:17:39.914856 3763056 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:17:39.918159 3763056 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:17:39.918190 3763056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:17:39.918218 3763056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/addons for local assets ...
	I1227 10:17:39.918282 3763056 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/files for local assets ...
	I1227 10:17:39.918405 3763056 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> 35331472.pem in /etc/ssl/certs
	I1227 10:17:39.918524 3763056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:17:39.926178 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /etc/ssl/certs/35331472.pem (1708 bytes)
	I1227 10:17:39.943926 3763056 start.go:296] duration metric: took 151.196581ms for postStartSetup
	I1227 10:17:39.944325 3763056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-161350
	I1227 10:17:39.969361 3763056 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/config.json ...
	I1227 10:17:39.969645 3763056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:17:39.969698 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:17:39.987648 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
	I1227 10:17:40.096457 3763056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:17:40.101581 3763056 start.go:128] duration metric: took 9.520806883s to createHost
	I1227 10:17:40.101606 3763056 start.go:83] releasing machines lock for "embed-certs-161350", held for 9.520943511s
	I1227 10:17:40.101695 3763056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-161350
	I1227 10:17:40.120068 3763056 ssh_runner.go:195] Run: cat /version.json
	I1227 10:17:40.120130 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:17:40.120429 3763056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:17:40.120492 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:17:40.142048 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
	I1227 10:17:40.151472 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
	I1227 10:17:40.242627 3763056 ssh_runner.go:195] Run: systemctl --version
	I1227 10:17:40.331198 3763056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:17:40.335662 3763056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:17:40.335758 3763056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:17:40.365290 3763056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:17:40.365323 3763056 start.go:496] detecting cgroup driver to use...
	I1227 10:17:40.365358 3763056 detect.go:187] detected "cgroupfs" cgroup driver on host os
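The detect.go line above reports the host's cgroup driver, and it agrees with the CgroupDriver:cgroupfs field in the docker info dumps earlier in this log. A minimal way to read that same field directly (a sketch, not minikube's actual detection code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// docker info exposes the daemon's cgroup driver as a template field.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Printf("detected %q cgroup driver on host os\n", strings.TrimSpace(string(out)))
}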
	I1227 10:17:40.365423 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 10:17:40.387292 3763056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 10:17:40.402255 3763056 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:17:40.402321 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:17:40.420915 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:17:40.440045 3763056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:17:40.557834 3763056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:17:40.683121 3763056 docker.go:234] disabling docker service ...
	I1227 10:17:40.683248 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:17:40.706381 3763056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:17:40.720733 3763056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:17:40.843264 3763056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:17:40.962442 3763056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:17:40.975424 3763056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:17:40.989978 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 10:17:40.999123 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 10:17:41.009942 3763056 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I1227 10:17:41.010013 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1227 10:17:41.019526 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 10:17:41.028794 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 10:17:41.037543 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 10:17:41.046669 3763056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:17:41.055500 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 10:17:41.065018 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 10:17:41.074377 3763056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 10:17:41.083589 3763056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:17:41.091802 3763056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:17:41.099594 3763056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:17:41.238093 3763056 ssh_runner.go:195] Run: sudo systemctl restart containerd
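The run of sed commands above rewrites /etc/containerd/config.toml before containerd is restarted; given the detected "cgroupfs" host driver, the key edit is forcing SystemdCgroup = false. A Go equivalent of that one substitution, applied to a made-up config fragment (the real flow shells the sed command over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as the sed expression in the log:
	//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	conf := []byte("    SystemdCgroup = true\n") // hypothetical fragment of config.toml
	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
	fmt.Print(string(out))
}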
	I1227 10:17:41.376092 3763056 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 10:17:41.376167 3763056 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 10:17:41.380404 3763056 start.go:574] Will wait 60s for crictl version
	I1227 10:17:41.380470 3763056 ssh_runner.go:195] Run: which crictl
	I1227 10:17:41.384211 3763056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:17:41.408620 3763056 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 10:17:41.408689 3763056 ssh_runner.go:195] Run: containerd --version
	I1227 10:17:41.428129 3763056 ssh_runner.go:195] Run: containerd --version
	I1227 10:17:41.452537 3763056 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 10:17:41.455600 3763056 cli_runner.go:164] Run: docker network inspect embed-certs-161350 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:17:41.472023 3763056 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:17:41.475960 3763056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:17:41.486181 3763056 kubeadm.go:884] updating cluster {Name:embed-certs-161350 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:17:41.486314 3763056 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:17:41.486383 3763056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:17:41.511121 3763056 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 10:17:41.511147 3763056 containerd.go:542] Images already preloaded, skipping extraction
	I1227 10:17:41.511211 3763056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:17:41.539145 3763056 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 10:17:41.539169 3763056 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:17:41.539177 3763056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1227 10:17:41.539266 3763056 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-161350 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 10:17:41.539337 3763056 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 10:17:41.565095 3763056 cni.go:84] Creating CNI manager for ""
	I1227 10:17:41.565120 3763056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:17:41.565138 3763056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:17:41.565161 3763056 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-161350 NodeName:embed-certs-161350 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:17:41.565283 3763056 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-161350"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:17:41.565371 3763056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:17:41.573458 3763056 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:17:41.573530 3763056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:17:41.581416 3763056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1227 10:17:41.594392 3763056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:17:41.607827 3763056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
	I1227 10:17:41.620708 3763056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:17:41.624397 3763056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:17:41.634154 3763056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:17:41.745796 3763056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:17:41.763235 3763056 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350 for IP: 192.168.76.2
	I1227 10:17:41.763259 3763056 certs.go:195] generating shared ca certs ...
	I1227 10:17:41.763274 3763056 certs.go:227] acquiring lock for ca certs: {Name:mk8b517b50583c7fd9315f1419472c192d2e7a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:17:41.763428 3763056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key
	I1227 10:17:41.763493 3763056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key
	I1227 10:17:41.763506 3763056 certs.go:257] generating profile certs ...
	I1227 10:17:41.763579 3763056 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.key
	I1227 10:17:41.763595 3763056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.crt with IP's: []
	I1227 10:17:42.280574 3763056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.crt ...
	I1227 10:17:42.280617 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.crt: {Name:mk8667852cc806cc3165c03c25c3a212a68f8de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:17:42.280860 3763056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.key ...
	I1227 10:17:42.280877 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/client.key: {Name:mkb10810d7a2b7f61b39f4261e8426c92f955a06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:17:42.280987 3763056 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key.45c8ab8d
	I1227 10:17:42.281010 3763056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt.45c8ab8d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:17:42.434583 3763056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt.45c8ab8d ...
	I1227 10:17:42.434616 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt.45c8ab8d: {Name:mk25aa3cc165e5dd0e3336aee06656ae79b623b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:17:42.434802 3763056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key.45c8ab8d ...
	I1227 10:17:42.434819 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key.45c8ab8d: {Name:mk19869ca165d1e9be82068dd967222d69549cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:17:42.434918 3763056 certs.go:382] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt.45c8ab8d -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt
	I1227 10:17:42.435015 3763056 certs.go:386] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key.45c8ab8d -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key
	I1227 10:17:42.435076 3763056 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.key
	I1227 10:17:42.435095 3763056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.crt with IP's: []
	I1227 10:17:43.180975 3763056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.crt ...
	I1227 10:17:43.181014 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.crt: {Name:mk6c432d3dba85ffdb00efb19ccf25436337b3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:17:43.181220 3763056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.key ...
	I1227 10:17:43.181238 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.key: {Name:mk02f27db2357d7ab70a1eb701b073ee8b3df705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:17:43.181445 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem (1338 bytes)
	W1227 10:17:43.181495 3763056 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147_empty.pem, impossibly tiny 0 bytes
	I1227 10:17:43.181508 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:17:43.181535 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:17:43.181565 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:17:43.181594 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem (1675 bytes)
	I1227 10:17:43.181642 3763056 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem (1708 bytes)
	I1227 10:17:43.182266 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:17:43.202786 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:17:43.222708 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:17:43.241462 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:17:43.260144 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1227 10:17:43.278243 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:17:43.296663 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:17:43.314521 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/embed-certs-161350/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:17:43.332147 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem --> /usr/share/ca-certificates/3533147.pem (1338 bytes)
	I1227 10:17:43.349931 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /usr/share/ca-certificates/35331472.pem (1708 bytes)
	I1227 10:17:43.371343 3763056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:17:43.390696 3763056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:17:43.405553 3763056 ssh_runner.go:195] Run: openssl version
	I1227 10:17:43.412623 3763056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/35331472.pem
	I1227 10:17:43.420280 3763056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/35331472.pem /etc/ssl/certs/35331472.pem
	I1227 10:17:43.427885 3763056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35331472.pem
	I1227 10:17:43.431482 3763056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:31 /usr/share/ca-certificates/35331472.pem
	I1227 10:17:43.431544 3763056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35331472.pem
	I1227 10:17:43.474604 3763056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:17:43.482052 3763056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/35331472.pem /etc/ssl/certs/3ec20f2e.0
	I1227 10:17:43.489207 3763056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:17:43.496409 3763056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:17:43.503991 3763056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:17:43.508047 3763056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:25 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:17:43.508111 3763056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:17:43.551068 3763056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:17:43.558603 3763056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:17:43.566114 3763056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3533147.pem
	I1227 10:17:43.574370 3763056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3533147.pem /etc/ssl/certs/3533147.pem
	I1227 10:17:43.582359 3763056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3533147.pem
	I1227 10:17:43.586190 3763056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:31 /usr/share/ca-certificates/3533147.pem
	I1227 10:17:43.586266 3763056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3533147.pem
	I1227 10:17:43.627776 3763056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:17:43.635555 3763056 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3533147.pem /etc/ssl/certs/51391683.0
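Each openssl x509 -hash -noout / ln -fs pair above installs a CA certificate under the subject-hash name (<hash>.0, e.g. b5213941.0 for minikubeCA.pem) that OpenSSL's certificate lookup expects in /etc/ssl/certs. A small sketch of those two steps from Go, reusing the exact commands seen in the log (paths illustrative; writing the symlink needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink replicates the two log steps: ask openssl for the subject hash,
// then (re)create the <hash>.0 symlink OpenSSL's lookup expects.
func hashLink(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certsDir + "/" + hash + ".0"
	os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}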
	I1227 10:17:43.643226 3763056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:17:43.647640 3763056 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:17:43.647743 3763056 kubeadm.go:401] StartCluster: {Name:embed-certs-161350 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-161350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:17:43.647893 3763056 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 10:17:43.647987 3763056 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:17:43.680131 3763056 cri.go:96] found id: ""
	I1227 10:17:43.680249 3763056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:17:43.688284 3763056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:17:43.698322 3763056 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:17:43.698415 3763056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:17:43.706536 3763056 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:17:43.706565 3763056 kubeadm.go:158] found existing configuration files:
	
	I1227 10:17:43.706618 3763056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:17:43.714465 3763056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:17:43.714539 3763056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:17:43.722341 3763056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:17:43.730458 3763056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:17:43.730544 3763056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:17:43.738522 3763056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:17:43.746458 3763056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:17:43.746578 3763056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:17:43.754004 3763056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:17:43.761683 3763056 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:17:43.761754 3763056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
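	The grep-then-rm sequence above is the stale-config cleanup: each /etc/kubernetes/*.conf survives only if it already references the expected control-plane endpoint (here every file is absent, so all four removals are no-ops). The pattern, roughly:
	
	  ENDPOINT=https://control-plane.minikube.internal:8443
	  for f in admin kubelet controller-manager scheduler; do
	    # drop any kubeconfig that does not point at the expected endpoint
	    sudo grep -q "$ENDPOINT" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done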
	I1227 10:17:43.769356 3763056 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:17:43.809938 3763056 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:17:43.810004 3763056 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:17:43.888039 3763056 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:17:43.888115 3763056 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:17:43.888161 3763056 kubeadm.go:319] OS: Linux
	I1227 10:17:43.888209 3763056 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:17:43.888258 3763056 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:17:43.888308 3763056 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:17:43.888357 3763056 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:17:43.888417 3763056 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:17:43.888467 3763056 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:17:43.888513 3763056 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:17:43.888563 3763056 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:17:43.888611 3763056 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:17:43.955360 3763056 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:17:43.955477 3763056 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:17:43.955574 3763056 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:17:43.961245 3763056 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:17:43.968297 3763056 out.go:252]   - Generating certificates and keys ...
	I1227 10:17:43.968464 3763056 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:17:43.968571 3763056 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:17:44.151759 3763056 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:17:44.437427 3763056 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:17:45.114605 3763056 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:17:45.275788 3763056 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:17:45.342233 3763056 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:17:45.342401 3763056 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-161350 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:17:45.566891 3763056 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:17:45.567053 3763056 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-161350 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:17:45.841396 3763056 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:17:46.006212 3763056 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:17:46.378329 3763056 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:17:46.378873 3763056 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:17:46.800316 3763056 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:17:46.925551 3763056 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:17:47.032542 3763056 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:17:47.520482 3763056 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:17:47.810676 3763056 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:17:47.811446 3763056 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:17:47.814237 3763056 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:17:47.818038 3763056 out.go:252]   - Booting up control plane ...
	I1227 10:17:47.818154 3763056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:17:47.818245 3763056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:17:47.818318 3763056 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:17:47.843318 3763056 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:17:47.843477 3763056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:17:47.851356 3763056 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:17:47.851462 3763056 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:17:47.851526 3763056 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:17:47.979462 3763056 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:17:47.979582 3763056 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:17:48.477634 3763056 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.78878ms
	I1227 10:17:48.481456 3763056 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1227 10:17:48.481548 3763056 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1227 10:17:48.481632 3763056 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1227 10:17:48.481706 3763056 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1227 10:17:50.990744 3763056 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.508846054s
	I1227 10:17:52.462168 3763056 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.980663632s
	I1227 10:17:54.483177 3763056 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001496484s
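	The three control-plane checks above poll well-known local health endpoints; they can be reproduced by hand (addresses as printed in the log, -k because the serving certificates are cluster-internal):
	
	  curl -sk https://192.168.76.2:8443/livez     # kube-apiserver
	  curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
	  curl -sk https://127.0.0.1:10259/livez       # kube-scheduler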
	I1227 10:17:54.522743 3763056 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1227 10:17:54.536550 3763056 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1227 10:17:54.554512 3763056 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1227 10:17:54.554751 3763056 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-161350 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1227 10:17:54.566778 3763056 kubeadm.go:319] [bootstrap-token] Using token: y9hbid.csa875mwzjt6ay1x
	I1227 10:17:54.569703 3763056 out.go:252]   - Configuring RBAC rules ...
	I1227 10:17:54.569838 3763056 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1227 10:17:54.573701 3763056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1227 10:17:54.582852 3763056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1227 10:17:54.589360 3763056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1227 10:17:54.593518 3763056 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1227 10:17:54.597707 3763056 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1227 10:17:54.890327 3763056 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1227 10:17:55.319638 3763056 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1227 10:17:55.890567 3763056 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1227 10:17:55.891775 3763056 kubeadm.go:319] 
	I1227 10:17:55.891850 3763056 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1227 10:17:55.891860 3763056 kubeadm.go:319] 
	I1227 10:17:55.891934 3763056 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1227 10:17:55.891940 3763056 kubeadm.go:319] 
	I1227 10:17:55.891964 3763056 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1227 10:17:55.892027 3763056 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1227 10:17:55.892080 3763056 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1227 10:17:55.892087 3763056 kubeadm.go:319] 
	I1227 10:17:55.892139 3763056 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1227 10:17:55.892148 3763056 kubeadm.go:319] 
	I1227 10:17:55.892199 3763056 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1227 10:17:55.892208 3763056 kubeadm.go:319] 
	I1227 10:17:55.892264 3763056 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1227 10:17:55.892346 3763056 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1227 10:17:55.892414 3763056 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1227 10:17:55.892422 3763056 kubeadm.go:319] 
	I1227 10:17:55.892501 3763056 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1227 10:17:55.892582 3763056 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1227 10:17:55.892591 3763056 kubeadm.go:319] 
	I1227 10:17:55.892670 3763056 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token y9hbid.csa875mwzjt6ay1x \
	I1227 10:17:55.892769 3763056 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:847679729b653704be851a5daf5af83009c664cd52aa150e19612857eea3005b \
	I1227 10:17:55.892792 3763056 kubeadm.go:319] 	--control-plane 
	I1227 10:17:55.892800 3763056 kubeadm.go:319] 
	I1227 10:17:55.892880 3763056 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1227 10:17:55.892887 3763056 kubeadm.go:319] 
	I1227 10:17:55.892964 3763056 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y9hbid.csa875mwzjt6ay1x \
	I1227 10:17:55.893064 3763056 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:847679729b653704be851a5daf5af83009c664cd52aa150e19612857eea3005b 
	I1227 10:17:55.896467 3763056 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:17:55.896884 3763056 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:17:55.896997 3763056 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
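	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. kubeadm's documented recipe recomputes it from the CA certificate (path per the certificateDir used above):
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'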
	I1227 10:17:55.897014 3763056 cni.go:84] Creating CNI manager for ""
	I1227 10:17:55.897025 3763056 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:17:55.900100 3763056 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1227 10:17:55.903019 3763056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1227 10:17:55.907085 3763056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I1227 10:17:55.907102 3763056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1227 10:17:55.922216 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1227 10:17:56.210219 3763056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1227 10:17:56.210355 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:17:56.210430 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-161350 minikube.k8s.io/updated_at=2025_12_27T10_17_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d12708197eb5984f26340287ced0d25367967e8 minikube.k8s.io/name=embed-certs-161350 minikube.k8s.io/primary=true
	I1227 10:17:56.462262 3763056 ops.go:34] apiserver oom_adj: -16
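	The oom_adj check above confirms the apiserver is shielded from the kernel OOM killer (-16 means it is among the last processes to be reaped); the equivalent one-liner, as run in the log:
	
	  cat /proc/$(pgrep kube-apiserver)/oom_adj
	
	The repeated "kubectl get sa default" calls that follow are a poll loop: once the default service account exists, kube-system privileges can be elevated (the elevateKubeSystemPrivileges metric below).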
	I1227 10:17:56.462387 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:17:56.962490 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:17:57.463417 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:17:57.962540 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:17:58.463122 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:17:58.963368 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:17:59.463030 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:17:59.962667 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:18:00.463349 3763056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1227 10:18:00.565700 3763056 kubeadm.go:1114] duration metric: took 4.355394718s to wait for elevateKubeSystemPrivileges
	I1227 10:18:00.565730 3763056 kubeadm.go:403] duration metric: took 16.917992409s to StartCluster
	I1227 10:18:00.565748 3763056 settings.go:142] acquiring lock: {Name:mk370c624a4706fdf792a8bb308be4364bde23af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:18:00.565823 3763056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 10:18:00.566814 3763056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/kubeconfig: {Name:mkc7143ac5be1b7104ba62728484394431aded08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:18:00.567070 3763056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 10:18:00.567176 3763056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1227 10:18:00.567446 3763056 config.go:182] Loaded profile config "embed-certs-161350": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:18:00.567496 3763056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1227 10:18:00.567554 3763056 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-161350"
	I1227 10:18:00.567569 3763056 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-161350"
	I1227 10:18:00.567592 3763056 host.go:66] Checking if "embed-certs-161350" exists ...
	I1227 10:18:00.568160 3763056 addons.go:70] Setting default-storageclass=true in profile "embed-certs-161350"
	I1227 10:18:00.568206 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
	I1227 10:18:00.568214 3763056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-161350"
	I1227 10:18:00.568561 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
	I1227 10:18:00.572294 3763056 out.go:179] * Verifying Kubernetes components...
	I1227 10:18:00.575564 3763056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:18:00.607362 3763056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1227 10:18:00.611199 3763056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:18:00.611226 3763056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1227 10:18:00.611295 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:18:00.616002 3763056 addons.go:239] Setting addon default-storageclass=true in "embed-certs-161350"
	I1227 10:18:00.616045 3763056 host.go:66] Checking if "embed-certs-161350" exists ...
	I1227 10:18:00.616504 3763056 cli_runner.go:164] Run: docker container inspect embed-certs-161350 --format={{.State.Status}}
	I1227 10:18:00.651806 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
	I1227 10:18:00.652481 3763056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1227 10:18:00.652495 3763056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1227 10:18:00.652553 3763056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-161350
	I1227 10:18:00.678394 3763056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36255 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/embed-certs-161350/id_rsa Username:docker}
	I1227 10:18:00.925504 3763056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1227 10:18:00.990014 3763056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:18:01.008071 3763056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1227 10:18:01.022079 3763056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1227 10:18:01.642442 3763056 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
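	The sed pipeline at 10:18:00.925504 is how host.minikube.internal becomes resolvable in-cluster: the coredns ConfigMap is fetched, a hosts block is spliced in ahead of the forward plugin, and the result is replaced. The injected Corefile stanza, reconstructed from that sed expression:
	
	  hosts {
	     192.168.76.1 host.minikube.internal
	     fallthrough
	  }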
	I1227 10:18:01.643520 3763056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-161350" to be "Ready" ...
	I1227 10:18:02.010986 3763056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.002824871s)
	I1227 10:18:02.025859 3763056 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1227 10:18:02.028996 3763056 addons.go:530] duration metric: took 1.461484918s for enable addons: enabled=[storage-provisioner default-storageclass]
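	Both addons follow the same install path seen above: render the manifest to /etc/kubernetes/addons/ over ssh, then apply it with the node-local kubectl. A quick post-hoc check (resource names assumed from minikube's stock manifests):
	
	  kubectl -n kube-system get pod storage-provisioner
	  kubectl get storageclass standard   # default-storageclass marks this class as the default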
	I1227 10:18:02.150618 3763056 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-161350" context rescaled to 1 replicas
	W1227 10:18:03.648594 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
	W1227 10:18:06.147768 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
	W1227 10:18:08.647536 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
	W1227 10:18:10.648659 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
	W1227 10:18:13.147866 3763056 node_ready.go:57] node "embed-certs-161350" has "Ready":"False" status (will retry)
	I1227 10:18:13.648106 3763056 node_ready.go:49] node "embed-certs-161350" is "Ready"
	I1227 10:18:13.648134 3763056 node_ready.go:38] duration metric: took 12.003385326s for node "embed-certs-161350" to be "Ready" ...
	I1227 10:18:13.648147 3763056 api_server.go:52] waiting for apiserver process to appear ...
	I1227 10:18:13.648207 3763056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 10:18:13.674165 3763056 api_server.go:72] duration metric: took 13.107053095s to wait for apiserver process to appear ...
	I1227 10:18:13.674189 3763056 api_server.go:88] waiting for apiserver healthz status ...
	I1227 10:18:13.674209 3763056 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1227 10:18:13.683041 3763056 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1227 10:18:13.684226 3763056 api_server.go:141] control plane version: v1.35.0
	I1227 10:18:13.684250 3763056 api_server.go:131] duration metric: took 10.053774ms to wait for apiserver health ...
	I1227 10:18:13.684260 3763056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1227 10:18:13.687246 3763056 system_pods.go:59] 8 kube-system pods found
	I1227 10:18:13.687279 3763056 system_pods.go:61] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:18:13.687288 3763056 system_pods.go:61] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:18:13.687294 3763056 system_pods.go:61] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
	I1227 10:18:13.687299 3763056 system_pods.go:61] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
	I1227 10:18:13.687306 3763056 system_pods.go:61] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:18:13.687310 3763056 system_pods.go:61] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
	I1227 10:18:13.687315 3763056 system_pods.go:61] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
	I1227 10:18:13.687321 3763056 system_pods.go:61] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:18:13.687330 3763056 system_pods.go:74] duration metric: took 3.064327ms to wait for pod list to return data ...
	I1227 10:18:13.687338 3763056 default_sa.go:34] waiting for default service account to be created ...
	I1227 10:18:13.693672 3763056 default_sa.go:45] found service account: "default"
	I1227 10:18:13.693696 3763056 default_sa.go:55] duration metric: took 6.352245ms for default service account to be created ...
	I1227 10:18:13.693707 3763056 system_pods.go:116] waiting for k8s-apps to be running ...
	I1227 10:18:13.698133 3763056 system_pods.go:86] 8 kube-system pods found
	I1227 10:18:13.698171 3763056 system_pods.go:89] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:18:13.698180 3763056 system_pods.go:89] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:18:13.698187 3763056 system_pods.go:89] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
	I1227 10:18:13.698192 3763056 system_pods.go:89] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
	I1227 10:18:13.698200 3763056 system_pods.go:89] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:18:13.698205 3763056 system_pods.go:89] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
	I1227 10:18:13.698210 3763056 system_pods.go:89] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
	I1227 10:18:13.698216 3763056 system_pods.go:89] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:18:13.698246 3763056 retry.go:84] will retry after 200ms: missing components: kube-dns
	I1227 10:18:13.949331 3763056 system_pods.go:86] 8 kube-system pods found
	I1227 10:18:13.949424 3763056 system_pods.go:89] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:18:13.949449 3763056 system_pods.go:89] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:18:13.949499 3763056 system_pods.go:89] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
	I1227 10:18:13.949533 3763056 system_pods.go:89] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
	I1227 10:18:13.949558 3763056 system_pods.go:89] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:18:13.949580 3763056 system_pods.go:89] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
	I1227 10:18:13.949614 3763056 system_pods.go:89] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
	I1227 10:18:13.949659 3763056 system_pods.go:89] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:18:14.253262 3763056 system_pods.go:86] 8 kube-system pods found
	I1227 10:18:14.253300 3763056 system_pods.go:89] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1227 10:18:14.253310 3763056 system_pods.go:89] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:18:14.253317 3763056 system_pods.go:89] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
	I1227 10:18:14.253323 3763056 system_pods.go:89] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
	I1227 10:18:14.253330 3763056 system_pods.go:89] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:18:14.253336 3763056 system_pods.go:89] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
	I1227 10:18:14.253341 3763056 system_pods.go:89] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
	I1227 10:18:14.253348 3763056 system_pods.go:89] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1227 10:18:14.708556 3763056 system_pods.go:86] 8 kube-system pods found
	I1227 10:18:14.708592 3763056 system_pods.go:89] "coredns-7d764666f9-f6v7w" [9e651ba4-9299-42c8-b93f-516a86362ce1] Running
	I1227 10:18:14.708603 3763056 system_pods.go:89] "etcd-embed-certs-161350" [fa9dcdf6-3e6b-4294-a77d-8fa5ae43b623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1227 10:18:14.708639 3763056 system_pods.go:89] "kindnet-fl99p" [ea98417c-d5b8-415a-9dca-9ece8c30aa00] Running
	I1227 10:18:14.708653 3763056 system_pods.go:89] "kube-apiserver-embed-certs-161350" [fe5e3118-326d-4674-b3ef-6d032f9a1ca5] Running
	I1227 10:18:14.708662 3763056 system_pods.go:89] "kube-controller-manager-embed-certs-161350" [79d7ccc8-ba14-4ee2-8bf0-69d9e41716cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1227 10:18:14.708668 3763056 system_pods.go:89] "kube-proxy-snglb" [d8fe467c-cc82-4cb6-986a-5821dd6d1f20] Running
	I1227 10:18:14.708678 3763056 system_pods.go:89] "kube-scheduler-embed-certs-161350" [9421b569-5406-48b4-a90f-524f81094860] Running
	I1227 10:18:14.708683 3763056 system_pods.go:89] "storage-provisioner" [4d972dfe-cafb-4fd2-a4a7-d2fc43c2c24c] Running
	I1227 10:18:14.708708 3763056 system_pods.go:126] duration metric: took 1.014981567s to wait for k8s-apps to be running ...
	I1227 10:18:14.708721 3763056 system_svc.go:44] waiting for kubelet service to be running ....
	I1227 10:18:14.708791 3763056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:18:14.722076 3763056 system_svc.go:56] duration metric: took 13.345565ms WaitForService to wait for kubelet
	I1227 10:18:14.722107 3763056 kubeadm.go:587] duration metric: took 14.155000619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1227 10:18:14.722144 3763056 node_conditions.go:102] verifying NodePressure condition ...
	I1227 10:18:14.725242 3763056 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1227 10:18:14.725285 3763056 node_conditions.go:123] node cpu capacity is 2
	I1227 10:18:14.725300 3763056 node_conditions.go:105] duration metric: took 3.144817ms to run NodePressure ...
	I1227 10:18:14.725333 3763056 start.go:242] waiting for startup goroutines ...
	I1227 10:18:14.725346 3763056 start.go:247] waiting for cluster config update ...
	I1227 10:18:14.725358 3763056 start.go:256] writing updated cluster config ...
	I1227 10:18:14.725652 3763056 ssh_runner.go:195] Run: rm -f paused
	I1227 10:18:14.729430 3763056 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:18:14.733217 3763056 pod_ready.go:83] waiting for pod "coredns-7d764666f9-f6v7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:14.737752 3763056 pod_ready.go:94] pod "coredns-7d764666f9-f6v7w" is "Ready"
	I1227 10:18:14.737781 3763056 pod_ready.go:86] duration metric: took 4.534384ms for pod "coredns-7d764666f9-f6v7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:14.740043 3763056 pod_ready.go:83] waiting for pod "etcd-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:15.745895 3763056 pod_ready.go:94] pod "etcd-embed-certs-161350" is "Ready"
	I1227 10:18:15.745925 3763056 pod_ready.go:86] duration metric: took 1.005854401s for pod "etcd-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:15.748355 3763056 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:15.753044 3763056 pod_ready.go:94] pod "kube-apiserver-embed-certs-161350" is "Ready"
	I1227 10:18:15.753074 3763056 pod_ready.go:86] duration metric: took 4.692993ms for pod "kube-apiserver-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:15.755574 3763056 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:15.933981 3763056 pod_ready.go:94] pod "kube-controller-manager-embed-certs-161350" is "Ready"
	I1227 10:18:15.934010 3763056 pod_ready.go:86] duration metric: took 178.399759ms for pod "kube-controller-manager-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:16.134401 3763056 pod_ready.go:83] waiting for pod "kube-proxy-snglb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:16.533900 3763056 pod_ready.go:94] pod "kube-proxy-snglb" is "Ready"
	I1227 10:18:16.533926 3763056 pod_ready.go:86] duration metric: took 399.495422ms for pod "kube-proxy-snglb" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:16.734055 3763056 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:17.134038 3763056 pod_ready.go:94] pod "kube-scheduler-embed-certs-161350" is "Ready"
	I1227 10:18:17.134073 3763056 pod_ready.go:86] duration metric: took 399.980268ms for pod "kube-scheduler-embed-certs-161350" in "kube-system" namespace to be "Ready" or be gone ...
	I1227 10:18:17.134087 3763056 pod_ready.go:40] duration metric: took 2.404624993s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1227 10:18:17.189411 3763056 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I1227 10:18:17.192720 3763056 out.go:203] 
	W1227 10:18:17.195698 3763056 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I1227 10:18:17.198715 3763056 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I1227 10:18:17.202513 3763056 out.go:179] * Done! kubectl is now configured to use "embed-certs-161350" cluster and "default" namespace by default
	I1227 10:18:33.783343 3738115 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000245188s
	I1227 10:18:33.783607 3738115 kubeadm.go:319] 
	I1227 10:18:33.783674 3738115 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:18:33.783709 3738115 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:18:33.783814 3738115 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:18:33.783820 3738115 kubeadm.go:319] 
	I1227 10:18:33.783924 3738115 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:18:33.783956 3738115 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:18:33.783987 3738115 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:18:33.783992 3738115 kubeadm.go:319] 
	I1227 10:18:33.788224 3738115 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:18:33.788670 3738115 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:18:33.788796 3738115 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:18:33.789043 3738115 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:18:33.789054 3738115 kubeadm.go:319] 
	I1227 10:18:33.789122 3738115 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
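	This is the actual TestForceSystemdFlag failure (note the PID switches back to 3738115 after the interleaved embed-certs-161350 logs): kubeadm waited the full 4m0s for a kubelet that never reported healthy. The triage kubeadm suggests, plus the exact probe it was polling:
	
	  systemctl status kubelet
	  journalctl -xeu kubelet
	  curl -sSL http://127.0.0.1:10248/healthz   # the endpoint that kept timing out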
	I1227 10:18:33.789185 3738115 kubeadm.go:403] duration metric: took 8m6.169929895s to StartCluster
	I1227 10:18:33.789236 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:18:33.789303 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:18:33.814197 3738115 cri.go:96] found id: ""
	I1227 10:18:33.814236 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.814245 3738115 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:18:33.814252 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 10:18:33.814314 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:18:33.839019 3738115 cri.go:96] found id: ""
	I1227 10:18:33.839043 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.839051 3738115 logs.go:284] No container was found matching "etcd"
	I1227 10:18:33.839058 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 10:18:33.839114 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:18:33.876385 3738115 cri.go:96] found id: ""
	I1227 10:18:33.876414 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.876427 3738115 logs.go:284] No container was found matching "coredns"
	I1227 10:18:33.876433 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:18:33.876491 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:18:33.906761 3738115 cri.go:96] found id: ""
	I1227 10:18:33.906788 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.906797 3738115 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:18:33.906803 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:18:33.906864 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:18:33.935959 3738115 cri.go:96] found id: ""
	I1227 10:18:33.935985 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.935994 3738115 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:18:33.936000 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:18:33.936056 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:18:33.960107 3738115 cri.go:96] found id: ""
	I1227 10:18:33.960131 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.960143 3738115 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:18:33.960149 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 10:18:33.960236 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:18:33.989273 3738115 cri.go:96] found id: ""
	I1227 10:18:33.989300 3738115 logs.go:282] 0 containers: []
	W1227 10:18:33.989310 3738115 logs.go:284] No container was found matching "kindnet"
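	The sweep above shows the control plane never materialized: for every expected component the runtime reports no container, running or exited. The underlying per-component query:
	
	  # k8s.io runc root; --quiet prints bare IDs, so empty output means nothing was ever created
	  sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver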
	I1227 10:18:33.989356 3738115 logs.go:123] Gathering logs for containerd ...
	I1227 10:18:33.989378 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 10:18:34.028316 3738115 logs.go:123] Gathering logs for container status ...
	I1227 10:18:34.028366 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 10:18:34.063676 3738115 logs.go:123] Gathering logs for kubelet ...
	I1227 10:18:34.063759 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:18:34.124368 3738115 logs.go:123] Gathering logs for dmesg ...
	I1227 10:18:34.124411 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:18:34.139149 3738115 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:18:34.139179 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:18:34.206064 3738115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:18:34.197603    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.198405    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.199906    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.200420    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.202145    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:18:34.197603    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.198405    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.199906    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.200420    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:34.202145    4861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
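	With no containers and no reachable apiserver (hence the connection-refused describe-nodes output above), minikube falls back to host-level sources. The same triage can be repeated by hand inside the node:
	
	  sudo journalctl -u kubelet -n 400      # the component that never became healthy
	  sudo journalctl -u containerd -n 400   # runtime-side view
	  sudo crictl ps -a                      # container status
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400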
	W1227 10:18:34.206090 3738115 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000245188s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:18:34.206211 3738115 out.go:285] * 
	W1227 10:18:34.206276 3738115 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the block above, elided ...]
	
	W1227 10:18:34.206296 3738115 out.go:285] * 
	W1227 10:18:34.206569 3738115 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:18:34.211278 3738115 out.go:203] 
	W1227 10:18:34.214075 3738115 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout/stderr identical to the block above, elided ...]
	
	W1227 10:18:34.214127 3738115 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:18:34.214153 3738115 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:18:34.217184 3738115 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834284373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834306880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834362879Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834389085Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834404781Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834421216Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834434811Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834447373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834468690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.834516156Z" level=info msg="Connect containerd service"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.835003407Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.836180697Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.855375575Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.855442552Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.855480869Z" level=info msg="Start subscribing containerd event"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.855530550Z" level=info msg="Start recovering state"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894697193Z" level=info msg="Start event monitor"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894747596Z" level=info msg="Start cni network conf syncer for default"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894758041Z" level=info msg="Start streaming server"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894768084Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894781663Z" level=info msg="runtime interface starting up..."
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894790336Z" level=info msg="starting plugins..."
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894806262Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 10:10:25 force-systemd-flag-027208 containerd[759]: time="2025-12-27T10:10:25.894993819Z" level=info msg="containerd successfully booted in 0.082110s"
	Dec 27 10:10:25 force-systemd-flag-027208 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:18:35.546559    4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:35.547192    4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:35.548864    4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:35.549437    4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:18:35.551145    4979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 09:24] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:18:35 up 16:01,  0 user,  load average: 3.10, 2.42, 2.26
	Linux force-systemd-flag-027208 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 10:18:32 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:18:33 force-systemd-flag-027208 kubelet[4774]: E1227 10:18:33.154055    4774 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:18:33 force-systemd-flag-027208 kubelet[4801]: E1227 10:18:33.924848    4801 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:18:33 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:18:34 force-systemd-flag-027208 kubelet[4873]: E1227 10:18:34.681089    4873 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:18:34 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:18:35 force-systemd-flag-027208 kubelet[4948]: E1227 10:18:35.423315    4948 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:18:35 force-systemd-flag-027208 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
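The kubelet journal above isolates the failure: kubelet v1.35 now validates the host cgroup hierarchy at startup and exits ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a loop (restart counter 318-321) and the control plane never comes up. A quick, generic way to confirm which cgroup hierarchy a host runs, using standard coreutils (a diagnostic sketch, not part of this test suite):

	$ stat -fc %T /sys/fs/cgroup/
	# cgroup2fs -> unified hierarchy (cgroup v2)
	# tmpfs     -> legacy hierarchy (cgroup v1), as on this runner

On systemd hosts, cgroup v2 can usually be enabled by booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line; that is a host-level change, not something this run verified.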
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-027208 -n force-systemd-flag-027208
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-027208 -n force-systemd-flag-027208: exit status 6 (318.717771ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:18:35.983396 3767379 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-027208" does not appear in /home/jenkins/minikube-integration/22343-3531265/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-027208" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-027208" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-027208
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-027208: (1.969364461s)
--- FAIL: TestForceSystemdFlag (503.95s)
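This test and TestForceSystemdEnv below fail the same way: kubeadm's wait-control-plane phase times out because kubelet v1.35 refuses to start on this cgroup v1 host. The preflight warning above names the escape hatch, the kubelet configuration option 'FailCgroupV1'. A minimal sketch of the corresponding KubeletConfiguration stanza (the camelCase field name and its availability are assumptions based on that warning and the v1beta1 config API; this run never exercised it):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Opt back in to cgroup v1 for kubelet v1.35+ (deprecated; see KEP-5573).
	failCgroupV1: false
	cgroupDriver: systemd

Minikube's own suggestion (--extra-config=kubelet.cgroup-driver=systemd) only addresses the cgroup driver, not the cgroup version check, so migrating the Jenkins runner to a cgroup v2 host remains the durable fix.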

                                                
                                    
TestForceSystemdEnv (507.95s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-194624 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-194624 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m24.161736318s)

                                                
                                                
-- stdout --
	* [force-systemd-env-194624] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-194624" primary control-plane node in "force-systemd-env-194624" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:03:57.190248 3717561 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:03:57.190450 3717561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:03:57.190476 3717561 out.go:374] Setting ErrFile to fd 2...
	I1227 10:03:57.190495 3717561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:03:57.190807 3717561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 10:03:57.191286 3717561 out.go:368] Setting JSON to false
	I1227 10:03:57.192229 3717561 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":56790,"bootTime":1766773048,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 10:03:57.192321 3717561 start.go:143] virtualization:  
	I1227 10:03:57.195798 3717561 out.go:179] * [force-systemd-env-194624] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:03:57.199871 3717561 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:03:57.199962 3717561 notify.go:221] Checking for updates...
	I1227 10:03:57.204446 3717561 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:03:57.207568 3717561 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 10:03:57.210740 3717561 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 10:03:57.213567 3717561 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:03:57.216544 3717561 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1227 10:03:57.220020 3717561 config.go:182] Loaded profile config "test-preload-587482": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:03:57.220119 3717561 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:03:57.259711 3717561 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:03:57.259832 3717561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:03:57.345356 3717561 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-27 10:03:57.334985892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:03:57.345469 3717561 docker.go:319] overlay module found
	I1227 10:03:57.348821 3717561 out.go:179] * Using the docker driver based on user configuration
	I1227 10:03:57.351789 3717561 start.go:309] selected driver: docker
	I1227 10:03:57.351811 3717561 start.go:928] validating driver "docker" against <nil>
	I1227 10:03:57.351826 3717561 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:03:57.352548 3717561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:03:57.444166 3717561 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-27 10:03:57.433729714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:03:57.444326 3717561 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:03:57.444582 3717561 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 10:03:57.447643 3717561 out.go:179] * Using Docker driver with root privileges
	I1227 10:03:57.450537 3717561 cni.go:84] Creating CNI manager for ""
	I1227 10:03:57.450610 3717561 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:03:57.450620 3717561 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:03:57.450697 3717561 start.go:353] cluster config:
	{Name:force-systemd-env-194624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-194624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:03:57.453995 3717561 out.go:179] * Starting "force-systemd-env-194624" primary control-plane node in "force-systemd-env-194624" cluster
	I1227 10:03:57.456818 3717561 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 10:03:57.459535 3717561 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:03:57.462430 3717561 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:03:57.462482 3717561 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 10:03:57.462493 3717561 cache.go:65] Caching tarball of preloaded images
	I1227 10:03:57.462519 3717561 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:03:57.462582 3717561 preload.go:251] Found /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 10:03:57.462592 3717561 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 10:03:57.462703 3717561 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/config.json ...
	I1227 10:03:57.462720 3717561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/config.json: {Name:mka9964611fe98d633d976be7095fcc1519e9704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:03:57.484424 3717561 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:03:57.484451 3717561 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:03:57.484469 3717561 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:03:57.484503 3717561 start.go:360] acquireMachinesLock for force-systemd-env-194624: {Name:mk7f926397b1164bf08869f70c66f46b23b815d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:03:57.484608 3717561 start.go:364] duration metric: took 87.777µs to acquireMachinesLock for "force-systemd-env-194624"
	I1227 10:03:57.484641 3717561 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-194624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-194624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 10:03:57.484719 3717561 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:03:57.488853 3717561 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:03:57.489363 3717561 start.go:159] libmachine.API.Create for "force-systemd-env-194624" (driver="docker")
	I1227 10:03:57.489897 3717561 client.go:173] LocalClient.Create starting
	I1227 10:03:57.489966 3717561 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem
	I1227 10:03:57.490002 3717561 main.go:144] libmachine: Decoding PEM data...
	I1227 10:03:57.490022 3717561 main.go:144] libmachine: Parsing certificate...
	I1227 10:03:57.490077 3717561 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem
	I1227 10:03:57.490098 3717561 main.go:144] libmachine: Decoding PEM data...
	I1227 10:03:57.490109 3717561 main.go:144] libmachine: Parsing certificate...
	I1227 10:03:57.490488 3717561 cli_runner.go:164] Run: docker network inspect force-systemd-env-194624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:03:57.530126 3717561 cli_runner.go:211] docker network inspect force-systemd-env-194624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:03:57.530213 3717561 network_create.go:284] running [docker network inspect force-systemd-env-194624] to gather additional debugging logs...
	I1227 10:03:57.530234 3717561 cli_runner.go:164] Run: docker network inspect force-systemd-env-194624
	W1227 10:03:57.549330 3717561 cli_runner.go:211] docker network inspect force-systemd-env-194624 returned with exit code 1
	I1227 10:03:57.549368 3717561 network_create.go:287] error running [docker network inspect force-systemd-env-194624]: docker network inspect force-systemd-env-194624: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-194624 not found
	I1227 10:03:57.549399 3717561 network_create.go:289] output of [docker network inspect force-systemd-env-194624]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-194624 not found
	
	** /stderr **
	I1227 10:03:57.549531 3717561 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:03:57.593512 3717561 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d8712ba8a9f7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9e:f2:5a:61:6a:4e} reservation:<nil>}
	I1227 10:03:57.593876 3717561 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-43ae11d059eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:6d:b0:96:78:2a} reservation:<nil>}
	I1227 10:03:57.594203 3717561 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c4bd1426b4b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:5d:63:1e:36:ed} reservation:<nil>}
	I1227 10:03:57.594657 3717561 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1d420}
	I1227 10:03:57.594681 3717561 network_create.go:124] attempt to create docker network force-systemd-env-194624 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 10:03:57.594749 3717561 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-194624 force-systemd-env-194624
	I1227 10:03:57.684276 3717561 network_create.go:108] docker network force-systemd-env-194624 192.168.76.0/24 created
	I1227 10:03:57.684308 3717561 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-194624" container
	I1227 10:03:57.684383 3717561 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:03:57.702121 3717561 cli_runner.go:164] Run: docker volume create force-systemd-env-194624 --label name.minikube.sigs.k8s.io=force-systemd-env-194624 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:03:57.722805 3717561 oci.go:103] Successfully created a docker volume force-systemd-env-194624
	I1227 10:03:57.722903 3717561 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-194624-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-194624 --entrypoint /usr/bin/test -v force-systemd-env-194624:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:03:58.460184 3717561 oci.go:107] Successfully prepared a docker volume force-systemd-env-194624
	I1227 10:03:58.460249 3717561 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:03:58.460260 3717561 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:03:58.460327 3717561 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-194624:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:04:04.592586 3717561 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-194624:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (6.132211743s)
	I1227 10:04:04.592615 3717561 kic.go:203] duration metric: took 6.132351916s to extract preloaded images to volume ...
	W1227 10:04:04.592750 3717561 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:04:04.592868 3717561 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:04:04.693171 3717561 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-194624 --name force-systemd-env-194624 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-194624 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-194624 --network force-systemd-env-194624 --ip 192.168.76.2 --volume force-systemd-env-194624:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:04:05.028826 3717561 cli_runner.go:164] Run: docker container inspect force-systemd-env-194624 --format={{.State.Running}}
	I1227 10:04:05.065969 3717561 cli_runner.go:164] Run: docker container inspect force-systemd-env-194624 --format={{.State.Status}}
	I1227 10:04:05.095434 3717561 cli_runner.go:164] Run: docker exec force-systemd-env-194624 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:04:05.159199 3717561 oci.go:144] the created container "force-systemd-env-194624" has a running status.
	I1227 10:04:05.159234 3717561 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-env-194624/id_rsa...
	I1227 10:04:05.722230 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-env-194624/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 10:04:05.722284 3717561 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-env-194624/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:04:05.743553 3717561 cli_runner.go:164] Run: docker container inspect force-systemd-env-194624 --format={{.State.Status}}
	I1227 10:04:05.762636 3717561 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:04:05.762657 3717561 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-194624 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:04:05.856239 3717561 cli_runner.go:164] Run: docker container inspect force-systemd-env-194624 --format={{.State.Status}}
	I1227 10:04:05.903189 3717561 machine.go:94] provisionDockerMachine start ...
	I1227 10:04:05.903287 3717561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-194624
	I1227 10:04:05.951185 3717561 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:05.951518 3717561 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36195 <nil> <nil>}
	I1227 10:04:05.951527 3717561 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:04:05.952179 3717561 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37850->127.0.0.1:36195: read: connection reset by peer
	I1227 10:04:09.099470 3717561 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-194624
	
	I1227 10:04:09.099495 3717561 ubuntu.go:182] provisioning hostname "force-systemd-env-194624"
	I1227 10:04:09.099570 3717561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-194624
	I1227 10:04:09.122399 3717561 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:09.122718 3717561 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36195 <nil> <nil>}
	I1227 10:04:09.122729 3717561 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-194624 && echo "force-systemd-env-194624" | sudo tee /etc/hostname
	I1227 10:04:09.273891 3717561 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-194624
	
	I1227 10:04:09.273976 3717561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-194624
	I1227 10:04:09.297919 3717561 main.go:144] libmachine: Using SSH client type: native
	I1227 10:04:09.298239 3717561 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36195 <nil> <nil>}
	I1227 10:04:09.298261 3717561 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-194624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-194624/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-194624' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:04:09.451876 3717561 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 10:04:09.451955 3717561 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-3531265/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-3531265/.minikube}
	I1227 10:04:09.451991 3717561 ubuntu.go:190] setting up certificates
	I1227 10:04:09.452029 3717561 provision.go:84] configureAuth start
	I1227 10:04:09.452123 3717561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-194624
	I1227 10:04:09.470894 3717561 provision.go:143] copyHostCerts
	I1227 10:04:09.471094 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
	I1227 10:04:09.471139 3717561 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem, removing ...
	I1227 10:04:09.471146 3717561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
	I1227 10:04:09.471232 3717561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem (1082 bytes)
	I1227 10:04:09.471311 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
	I1227 10:04:09.471328 3717561 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem, removing ...
	I1227 10:04:09.471332 3717561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
	I1227 10:04:09.471358 3717561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem (1123 bytes)
	I1227 10:04:09.471395 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
	I1227 10:04:09.471409 3717561 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem, removing ...
	I1227 10:04:09.471413 3717561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
	I1227 10:04:09.471435 3717561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem (1675 bytes)
	I1227 10:04:09.471478 3717561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-194624 san=[127.0.0.1 192.168.76.2 force-systemd-env-194624 localhost minikube]
	I1227 10:04:09.885647 3717561 provision.go:177] copyRemoteCerts
	I1227 10:04:09.885769 3717561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:04:09.885847 3717561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-194624
	I1227 10:04:09.903913 3717561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36195 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-env-194624/id_rsa Username:docker}
	I1227 10:04:10.010617 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 10:04:10.010698 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:04:10.036109 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 10:04:10.036201 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 10:04:10.059626 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 10:04:10.059697 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1227 10:04:10.089498 3717561 provision.go:87] duration metric: took 637.43742ms to configureAuth
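configureAuth refreshed the host-side CA material and minted a server certificate whose SANs cover every name the machine answers to (the san=[...] list in the provision.go:117 line above). minikube does this in Go; purely as an illustration of the same SAN set, an equivalent openssl invocation would look roughly like:

    # Illustrative only -- minikube generates this cert internally, not via openssl.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.force-systemd-env-194624"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:force-systemd-env-194624,DNS:localhost,DNS:minikube')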
	I1227 10:04:10.089525 3717561 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:04:10.089733 3717561 config.go:182] Loaded profile config "force-systemd-env-194624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:04:10.089742 3717561 machine.go:97] duration metric: took 4.186535694s to provisionDockerMachine
	I1227 10:04:10.089750 3717561 client.go:176] duration metric: took 12.599839131s to LocalClient.Create
	I1227 10:04:10.089775 3717561 start.go:167] duration metric: took 12.600414151s to libmachine.API.Create "force-systemd-env-194624"
	I1227 10:04:10.089784 3717561 start.go:293] postStartSetup for "force-systemd-env-194624" (driver="docker")
	I1227 10:04:10.089793 3717561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:04:10.089847 3717561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:04:10.089894 3717561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-194624
	I1227 10:04:10.124735 3717561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36195 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-env-194624/id_rsa Username:docker}
	I1227 10:04:10.228162 3717561 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:04:10.232289 3717561 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:04:10.232363 3717561 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:04:10.232389 3717561 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/addons for local assets ...
	I1227 10:04:10.232481 3717561 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/files for local assets ...
	I1227 10:04:10.232594 3717561 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> 35331472.pem in /etc/ssl/certs
	I1227 10:04:10.232620 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> /etc/ssl/certs/35331472.pem
	I1227 10:04:10.232766 3717561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:04:10.241669 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /etc/ssl/certs/35331472.pem (1708 bytes)
	I1227 10:04:10.262842 3717561 start.go:296] duration metric: took 173.026615ms for postStartSetup
	I1227 10:04:10.263288 3717561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-194624
	I1227 10:04:10.296607 3717561 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/config.json ...
	I1227 10:04:10.296884 3717561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:04:10.296933 3717561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-194624
	I1227 10:04:10.314102 3717561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36195 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-env-194624/id_rsa Username:docker}
	I1227 10:04:10.412489 3717561 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:04:10.417931 3717561 start.go:128] duration metric: took 12.933198038s to createHost
	I1227 10:04:10.417954 3717561 start.go:83] releasing machines lock for "force-systemd-env-194624", held for 12.93333127s
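The two df probes bracket the config save: the first reports how full /var is as a percentage, the second how many GiB remain, presumably feeding minikube's low-disk warnings. Run by hand inside the node they would be:

    df -h /var | awk 'NR==2{print $5}'   # percent of /var in use
    df -BG /var | awk 'NR==2{print $4}'  # GiB still free on /var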
	I1227 10:04:10.418024 3717561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-194624
	I1227 10:04:10.435571 3717561 ssh_runner.go:195] Run: cat /version.json
	I1227 10:04:10.435627 3717561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-194624
	I1227 10:04:10.435876 3717561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:04:10.435938 3717561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-194624
	I1227 10:04:10.468432 3717561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36195 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-env-194624/id_rsa Username:docker}
	I1227 10:04:10.478676 3717561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36195 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-env-194624/id_rsa Username:docker}
	I1227 10:04:10.588156 3717561 ssh_runner.go:195] Run: systemctl --version
	I1227 10:04:10.693226 3717561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:04:10.698425 3717561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:04:10.698551 3717561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:04:10.730017 3717561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
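The find/-exec one-liner above renames any pre-existing bridge or podman CNI configs to *.mk_disabled so they cannot shadow the CNI minikube configures later (here it caught the podman bridge conflist and an already-disabled crio one). An equivalent, more readable form:

    # Park conflicting CNI configs aside, exactly as the logged command does.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;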
	I1227 10:04:10.730050 3717561 start.go:496] detecting cgroup driver to use...
	I1227 10:04:10.730070 3717561 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 10:04:10.730128 3717561 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 10:04:10.748331 3717561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 10:04:10.763807 3717561 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:04:10.763880 3717561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:04:10.782286 3717561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:04:10.801388 3717561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:04:10.919772 3717561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:04:11.052843 3717561 docker.go:234] disabling docker service ...
	I1227 10:04:11.052985 3717561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:04:11.080505 3717561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:04:11.095558 3717561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:04:11.304206 3717561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:04:11.492866 3717561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
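With containerd as the selected runtime, minikube stops, disables, and masks both cri-docker and docker so nothing else can claim the CRI socket; masking (rather than just disabling) also prevents socket activation from resurrecting them. Condensed, the sequence is:

    # Make containerd the only runtime: silence Docker and its CRI shim for good.
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active --quiet docker || echo "docker is down"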
	I1227 10:04:11.513057 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:04:11.544431 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 10:04:11.556208 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 10:04:11.572617 3717561 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 10:04:11.572687 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 10:04:11.583754 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 10:04:11.594110 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 10:04:11.606254 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 10:04:11.616722 3717561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:04:11.632972 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 10:04:11.645720 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 10:04:11.661745 3717561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 10:04:11.671674 3717561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:04:11.691123 3717561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:04:11.707018 3717561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:04:11.921132 3717561 ssh_runner.go:195] Run: sudo systemctl restart containerd
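This block is the behavior under test: the sed edits force SystemdCgroup = true in containerd's runc options (alongside pause-image, CNI conf_dir, and unprivileged-port tweaks) before the daemon-reload/restart picks them up. A quick way to confirm the switch landed, both on disk and in the running daemon:

    # On-disk: the sed above should have left exactly this setting.
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
    # Runtime view: crictl info dumps the CRI config containerd actually loaded.
    sudo crictl info | grep -i systemdcgroup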
	I1227 10:04:12.102164 3717561 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 10:04:12.102236 3717561 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 10:04:12.108041 3717561 start.go:574] Will wait 60s for crictl version
	I1227 10:04:12.108111 3717561 ssh_runner.go:195] Run: which crictl
	I1227 10:04:12.113976 3717561 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:04:12.176253 3717561 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 10:04:12.176377 3717561 ssh_runner.go:195] Run: containerd --version
	I1227 10:04:12.213531 3717561 ssh_runner.go:195] Run: containerd --version
	I1227 10:04:12.243973 3717561 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 10:04:12.246904 3717561 cli_runner.go:164] Run: docker network inspect force-systemd-env-194624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:04:12.268403 3717561 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 10:04:12.276789 3717561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:04:12.299016 3717561 kubeadm.go:884] updating cluster {Name:force-systemd-env-194624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-194624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:04:12.299144 3717561 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:04:12.299222 3717561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:04:12.356484 3717561 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 10:04:12.356512 3717561 containerd.go:542] Images already preloaded, skipping extraction
	I1227 10:04:12.356577 3717561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:04:12.399123 3717561 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 10:04:12.399147 3717561 cache_images.go:86] Images are preloaded, skipping loading
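The preload check is just an inventory comparison: crictl lists what containerd already holds, and minikube skips the tarball extraction when every required image is present. The same inventory, listed by hand (assuming jq is installed in the node):

    # Tags containerd already holds, as consulted by the preload check above.
    sudo crictl images --output json | jq -r '.images[].repoTags[]?' | sort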
	I1227 10:04:12.399154 3717561 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I1227 10:04:12.399262 3717561 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-194624 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-194624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
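The unit above pins the kubelet to containerd (Wants=containerd.service) and to this node's name and IP; it lands on disk via the scp calls a few lines below, as /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in. Once written, the merged result can be checked with systemd's own tooling:

    systemctl cat kubelet          # unit file plus all drop-ins, as systemd sees them
    sudo systemctl daemon-reload   # required after any unit edit, as the log does next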
	I1227 10:04:12.399337 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 10:04:12.460561 3717561 cni.go:84] Creating CNI manager for ""
	I1227 10:04:12.460587 3717561 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:04:12.460610 3717561 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:04:12.460635 3717561 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-194624 NodeName:force-systemd-env-194624 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:04:12.460759 3717561 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-194624"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:04:12.460835 3717561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:04:12.473330 3717561 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:04:12.473407 3717561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:04:12.487628 3717561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1227 10:04:12.512157 3717561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:04:12.532110 3717561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
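At this point the full kubeadm config shown above sits at /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can lint such a file before init is attempted; a hedged sanity check against the same binary this run uses:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new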
	I1227 10:04:12.557402 3717561 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:04:12.561187 3717561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:04:12.577158 3717561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:04:12.759662 3717561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:04:12.788198 3717561 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624 for IP: 192.168.76.2
	I1227 10:04:12.788219 3717561 certs.go:195] generating shared ca certs ...
	I1227 10:04:12.788236 3717561 certs.go:227] acquiring lock for ca certs: {Name:mk8b517b50583c7fd9315f1419472c192d2e7a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:12.788385 3717561 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key
	I1227 10:04:12.788450 3717561 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key
	I1227 10:04:12.788463 3717561 certs.go:257] generating profile certs ...
	I1227 10:04:12.788519 3717561 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/client.key
	I1227 10:04:12.788544 3717561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/client.crt with IP's: []
	I1227 10:04:13.128498 3717561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/client.crt ...
	I1227 10:04:13.128533 3717561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/client.crt: {Name:mkd0c970c8e650bbe18496f6abd1330f5879e392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:13.128731 3717561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/client.key ...
	I1227 10:04:13.128749 3717561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/client.key: {Name:mk241010c4a871836734eb8f5e4a323e62892518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:13.128837 3717561 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.key.342f9805
	I1227 10:04:13.128862 3717561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.crt.342f9805 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 10:04:13.345183 3717561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.crt.342f9805 ...
	I1227 10:04:13.345217 3717561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.crt.342f9805: {Name:mk5b41f2ec1ba062cd3d165d32754a8fe50a0b3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:13.346123 3717561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.key.342f9805 ...
	I1227 10:04:13.346149 3717561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.key.342f9805: {Name:mk0ba6c1cce4e3f5f676a89791f5c40dde20ff14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:13.346285 3717561 certs.go:382] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.crt.342f9805 -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.crt
	I1227 10:04:13.346386 3717561 certs.go:386] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.key.342f9805 -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.key
	I1227 10:04:13.346472 3717561 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.key
	I1227 10:04:13.346493 3717561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.crt with IP's: []
	I1227 10:04:13.464310 3717561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.crt ...
	I1227 10:04:13.464342 3717561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.crt: {Name:mka7c0c7282eec458df36aa749274f53f11909b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:13.464559 3717561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.key ...
	I1227 10:04:13.464578 3717561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.key: {Name:mkd280d24af3eb562d3a98b2ba78fa974134746a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:04:13.464679 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 10:04:13.464703 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 10:04:13.464717 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 10:04:13.464734 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 10:04:13.464746 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 10:04:13.464762 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 10:04:13.464773 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 10:04:13.464790 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 10:04:13.464839 3717561 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem (1338 bytes)
	W1227 10:04:13.464887 3717561 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147_empty.pem, impossibly tiny 0 bytes
	I1227 10:04:13.464901 3717561 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:04:13.464928 3717561 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:04:13.464957 3717561 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:04:13.464986 3717561 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem (1675 bytes)
	I1227 10:04:13.465033 3717561 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem (1708 bytes)
	I1227 10:04:13.465069 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:13.465087 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem -> /usr/share/ca-certificates/3533147.pem
	I1227 10:04:13.465098 3717561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> /usr/share/ca-certificates/35331472.pem
	I1227 10:04:13.465611 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:04:13.483352 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:04:13.501117 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:04:13.521964 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:04:13.557137 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 10:04:13.581201 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 10:04:13.626001 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:04:13.669099 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-env-194624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:04:13.719522 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:04:13.753002 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem --> /usr/share/ca-certificates/3533147.pem (1338 bytes)
	I1227 10:04:13.773421 3717561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /usr/share/ca-certificates/35331472.pem (1708 bytes)
	I1227 10:04:13.810160 3717561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 10:04:13.825616 3717561 ssh_runner.go:195] Run: openssl version
	I1227 10:04:13.835079 3717561 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:13.842827 3717561 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:04:13.853742 3717561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:13.864511 3717561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:25 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:13.864585 3717561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:04:13.928068 3717561 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:04:13.940226 3717561 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:04:13.952261 3717561 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3533147.pem
	I1227 10:04:13.964783 3717561 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3533147.pem /etc/ssl/certs/3533147.pem
	I1227 10:04:13.975578 3717561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3533147.pem
	I1227 10:04:13.979277 3717561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:31 /usr/share/ca-certificates/3533147.pem
	I1227 10:04:13.979348 3717561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3533147.pem
	I1227 10:04:14.030452 3717561 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:04:14.038603 3717561 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3533147.pem /etc/ssl/certs/51391683.0
	I1227 10:04:14.046202 3717561 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/35331472.pem
	I1227 10:04:14.054010 3717561 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/35331472.pem /etc/ssl/certs/35331472.pem
	I1227 10:04:14.063066 3717561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35331472.pem
	I1227 10:04:14.067851 3717561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:31 /usr/share/ca-certificates/35331472.pem
	I1227 10:04:14.067920 3717561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35331472.pem
	I1227 10:04:14.114765 3717561 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:04:14.128854 3717561 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/35331472.pem /etc/ssl/certs/3ec20f2e.0
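The test/ln/openssl round-trips above implement OpenSSL's subject-hash convention: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA here) so openssl can locate it during verification. Generalized:

    # Link a CA under its subject hash, as done for each cert above.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"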
	I1227 10:04:14.141755 3717561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:04:14.147746 3717561 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:04:14.147804 3717561 kubeadm.go:401] StartCluster: {Name:force-systemd-env-194624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-194624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:04:14.147884 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 10:04:14.147946 3717561 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:04:14.199949 3717561 cri.go:96] found id: ""
	I1227 10:04:14.200022 3717561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:04:14.211227 3717561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:04:14.226509 3717561 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:04:14.226579 3717561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:04:14.243887 3717561 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:04:14.243908 3717561 kubeadm.go:158] found existing configuration files:
	
	I1227 10:04:14.243965 3717561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:04:14.253676 3717561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:04:14.253783 3717561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:04:14.263530 3717561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:04:14.274219 3717561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:04:14.274286 3717561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:04:14.282928 3717561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:04:14.291267 3717561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:04:14.291340 3717561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:04:14.298643 3717561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:04:14.307255 3717561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:04:14.307321 3717561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:04:14.314892 3717561 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:04:14.375935 3717561 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:04:14.375997 3717561 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:04:14.517419 3717561 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:04:14.517509 3717561 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:04:14.517547 3717561 kubeadm.go:319] OS: Linux
	I1227 10:04:14.517600 3717561 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:04:14.517652 3717561 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:04:14.517702 3717561 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:04:14.517755 3717561 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:04:14.517807 3717561 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:04:14.517858 3717561 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:04:14.517910 3717561 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:04:14.517962 3717561 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:04:14.518012 3717561 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:04:14.599919 3717561 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:04:14.600048 3717561 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:04:14.600160 3717561 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:04:14.607358 3717561 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:04:14.612598 3717561 out.go:252]   - Generating certificates and keys ...
	I1227 10:04:14.612694 3717561 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:04:14.612767 3717561 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:04:14.998439 3717561 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:04:15.301325 3717561 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:04:15.340942 3717561 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:04:15.785314 3717561 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:04:16.109618 3717561 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:04:16.110001 3717561 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-194624 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:04:16.361364 3717561 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:04:16.361732 3717561 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-194624 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 10:04:16.582035 3717561 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:04:16.851334 3717561 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:04:16.999628 3717561 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:04:17.000036 3717561 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:04:17.206364 3717561 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:04:17.385477 3717561 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:04:18.018108 3717561 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:04:18.536763 3717561 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:04:18.790406 3717561 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:04:18.791394 3717561 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:04:18.794244 3717561 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:04:18.797890 3717561 out.go:252]   - Booting up control plane ...
	I1227 10:04:18.798019 3717561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:04:18.798131 3717561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:04:18.798820 3717561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:04:18.822235 3717561 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:04:18.822352 3717561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:04:18.831303 3717561 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:04:18.831733 3717561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:04:18.832508 3717561 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:04:18.992889 3717561 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:04:18.993012 3717561 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:08:18.993751 3717561 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001078882s
	I1227 10:08:18.993782 3717561 kubeadm.go:319] 
	I1227 10:08:18.993843 3717561 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:08:18.993876 3717561 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:08:18.993988 3717561 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:08:18.994000 3717561 kubeadm.go:319] 
	I1227 10:08:18.994115 3717561 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:08:18.994153 3717561 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:08:18.994186 3717561 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:08:18.994193 3717561 kubeadm.go:319] 
	I1227 10:08:18.999064 3717561 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:08:18.999504 3717561 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:08:18.999619 3717561 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:08:18.999857 3717561 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:08:18.999867 3717561 kubeadm.go:319] 
	I1227 10:08:18.999936 3717561 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:08:19.000074 3717561 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-194624 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-194624 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001078882s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-194624 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-194624 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001078882s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 10:08:19.000162 3717561 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1227 10:08:19.426722 3717561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:08:19.440676 3717561 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:08:19.440743 3717561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:08:19.448930 3717561 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:08:19.448953 3717561 kubeadm.go:158] found existing configuration files:
	
	I1227 10:08:19.449008 3717561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:08:19.456896 3717561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:08:19.456967 3717561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:08:19.464598 3717561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:08:19.472657 3717561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:08:19.472725 3717561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:08:19.480730 3717561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:08:19.488719 3717561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:08:19.488786 3717561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:08:19.496470 3717561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:08:19.504436 3717561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:08:19.504506 3717561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:08:19.512449 3717561 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:08:19.560608 3717561 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:08:19.560672 3717561 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:08:19.636329 3717561 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:08:19.636408 3717561 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:08:19.636448 3717561 kubeadm.go:319] OS: Linux
	I1227 10:08:19.636497 3717561 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:08:19.636549 3717561 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:08:19.636600 3717561 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:08:19.636652 3717561 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:08:19.636704 3717561 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:08:19.636757 3717561 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:08:19.636806 3717561 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:08:19.636857 3717561 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:08:19.636907 3717561 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:08:19.712190 3717561 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:08:19.712304 3717561 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:08:19.712400 3717561 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:08:19.719447 3717561 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:08:19.725083 3717561 out.go:252]   - Generating certificates and keys ...
	I1227 10:08:19.725193 3717561 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:08:19.725281 3717561 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:08:19.725373 3717561 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:08:19.725451 3717561 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:08:19.725552 3717561 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:08:19.725623 3717561 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:08:19.725702 3717561 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:08:19.725777 3717561 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:08:19.725881 3717561 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:08:19.725966 3717561 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:08:19.726019 3717561 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:08:19.726089 3717561 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:08:19.900874 3717561 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:08:20.142022 3717561 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:08:20.264240 3717561 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:08:20.324762 3717561 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:08:20.623022 3717561 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:08:20.623746 3717561 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:08:20.626363 3717561 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:08:20.629622 3717561 out.go:252]   - Booting up control plane ...
	I1227 10:08:20.629755 3717561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:08:20.629850 3717561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:08:20.629923 3717561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:08:20.651050 3717561 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:08:20.651173 3717561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:08:20.659617 3717561 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:08:20.663791 3717561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:08:20.664052 3717561 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:08:20.807811 3717561 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:08:20.807939 3717561 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:12:20.808075 3717561 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000282189s
	I1227 10:12:20.808112 3717561 kubeadm.go:319] 
	I1227 10:12:20.808198 3717561 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:12:20.808253 3717561 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:12:20.808380 3717561 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:12:20.808389 3717561 kubeadm.go:319] 
	I1227 10:12:20.808513 3717561 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:12:20.808558 3717561 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:12:20.808590 3717561 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:12:20.808595 3717561 kubeadm.go:319] 
	I1227 10:12:20.813848 3717561 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:12:20.814330 3717561 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:12:20.814451 3717561 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:12:20.814721 3717561 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:12:20.814743 3717561 kubeadm.go:319] 
	I1227 10:12:20.814815 3717561 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:12:20.814885 3717561 kubeadm.go:403] duration metric: took 8m6.667085127s to StartCluster
	I1227 10:12:20.814932 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:12:20.815018 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:12:20.840078 3717561 cri.go:96] found id: ""
	I1227 10:12:20.840113 3717561 logs.go:282] 0 containers: []
	W1227 10:12:20.840122 3717561 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:12:20.840129 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 10:12:20.840188 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:12:20.885786 3717561 cri.go:96] found id: ""
	I1227 10:12:20.885814 3717561 logs.go:282] 0 containers: []
	W1227 10:12:20.885823 3717561 logs.go:284] No container was found matching "etcd"
	I1227 10:12:20.885829 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 10:12:20.885895 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:12:20.958282 3717561 cri.go:96] found id: ""
	I1227 10:12:20.958307 3717561 logs.go:282] 0 containers: []
	W1227 10:12:20.958316 3717561 logs.go:284] No container was found matching "coredns"
	I1227 10:12:20.958323 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:12:20.958382 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:12:20.983543 3717561 cri.go:96] found id: ""
	I1227 10:12:20.983568 3717561 logs.go:282] 0 containers: []
	W1227 10:12:20.983577 3717561 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:12:20.983583 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:12:20.983660 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:12:21.009164 3717561 cri.go:96] found id: ""
	I1227 10:12:21.009191 3717561 logs.go:282] 0 containers: []
	W1227 10:12:21.009208 3717561 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:12:21.009215 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:12:21.009298 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:12:21.035983 3717561 cri.go:96] found id: ""
	I1227 10:12:21.036068 3717561 logs.go:282] 0 containers: []
	W1227 10:12:21.036092 3717561 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:12:21.036116 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 10:12:21.036201 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:12:21.062002 3717561 cri.go:96] found id: ""
	I1227 10:12:21.062024 3717561 logs.go:282] 0 containers: []
	W1227 10:12:21.062032 3717561 logs.go:284] No container was found matching "kindnet"
	I1227 10:12:21.062043 3717561 logs.go:123] Gathering logs for kubelet ...
	I1227 10:12:21.062055 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:12:21.120013 3717561 logs.go:123] Gathering logs for dmesg ...
	I1227 10:12:21.120051 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:12:21.135388 3717561 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:12:21.135416 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:12:21.203963 3717561 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:12:21.196052    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.196781    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.198310    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.198854    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.200018    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:12:21.196052    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.196781    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.198310    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.198854    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.200018    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 10:12:21.203987 3717561 logs.go:123] Gathering logs for containerd ...
	I1227 10:12:21.204000 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 10:12:21.243349 3717561 logs.go:123] Gathering logs for container status ...
	I1227 10:12:21.243385 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 10:12:21.272276 3717561 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282189s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:12:21.272328 3717561 out.go:285] * 
	W1227 10:12:21.272388 3717561 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282189s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:12:21.272405 3717561 out.go:285] * 
	W1227 10:12:21.272654 3717561 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:12:21.279213 3717561 out.go:203] 
	W1227 10:12:21.282884 3717561 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282189s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:12:21.282998 3717561 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:12:21.283026 3717561 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:12:21.286058 3717561 out.go:203] 

                                                
                                                
** /stderr **
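The stderr above shows the same failure on both kubeadm attempts: every phase through [kubelet-start] succeeds, then the kubelet health check at http://127.0.0.1:10248/healthz times out after 4m0s. The log's own "Suggestion:" line points at the cgroup driver. A minimal sketch of applying that advice, reusing the profile name and flags from the failing command; the delete step and flag placement are assumptions, and only the --extra-config value comes verbatim from the log:

	# Sketch: retry the failed profile with the cgroup driver the log suggests.
	# Profile, driver, and runtime flags are copied from the failing command;
	# --extra-config=kubelet.cgroup-driver=systemd is the log's own suggestion.
	out/minikube-linux-arm64 delete -p force-systemd-env-194624
	out/minikube-linux-arm64 start -p force-systemd-env-194624 \
	  --memory=3072 --alsologtostderr -v=5 \
	  --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd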
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-194624 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-194624 ssh "cat /etc/containerd/config.toml"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-27 10:12:21.627133934 +0000 UTC m=+2829.418285862
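Before the post-mortem dump below, the kubeadm error text already names the node-side checks worth running. A sketch using the same ssh invocation the test itself uses (docker_test.go:121 above); the final cgroup-filesystem check is an assumed addition, not taken from the log, and prints cgroup2fs on a cgroups-v2 host versus tmpfs on v1, which bears on the SystemVerification deprecation warning:

	# The systemctl/journalctl commands come from the kubeadm error text above.
	out/minikube-linux-arm64 -p force-systemd-env-194624 ssh "sudo systemctl status kubelet"
	out/minikube-linux-arm64 -p force-systemd-env-194624 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# Assumed extra check (not in the log): which cgroup version the node sees.
	out/minikube-linux-arm64 -p force-systemd-env-194624 ssh "stat -fc %T /sys/fs/cgroup"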
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-194624
helpers_test.go:244: (dbg) docker inspect force-systemd-env-194624:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "479a2fdf340ae0b9cc46abd52cf988134afbafd4dfd9ca682808671c64366836",
	        "Created": "2025-12-27T10:04:04.710899688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3718300,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T10:04:04.780198391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/479a2fdf340ae0b9cc46abd52cf988134afbafd4dfd9ca682808671c64366836/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/479a2fdf340ae0b9cc46abd52cf988134afbafd4dfd9ca682808671c64366836/hostname",
	        "HostsPath": "/var/lib/docker/containers/479a2fdf340ae0b9cc46abd52cf988134afbafd4dfd9ca682808671c64366836/hosts",
	        "LogPath": "/var/lib/docker/containers/479a2fdf340ae0b9cc46abd52cf988134afbafd4dfd9ca682808671c64366836/479a2fdf340ae0b9cc46abd52cf988134afbafd4dfd9ca682808671c64366836-json.log",
	        "Name": "/force-systemd-env-194624",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-194624:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-194624",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "479a2fdf340ae0b9cc46abd52cf988134afbafd4dfd9ca682808671c64366836",
	                "LowerDir": "/var/lib/docker/overlay2/4dd1fb2c1e6d9242d640bda7514dcf91cac72b352fa12224eabb8e809aaae3af-init/diff:/var/lib/docker/overlay2/2db3190b649abc62a8f6b3256c95cbe4767892923c34d4bdea0f0debaf7248d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4dd1fb2c1e6d9242d640bda7514dcf91cac72b352fa12224eabb8e809aaae3af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4dd1fb2c1e6d9242d640bda7514dcf91cac72b352fa12224eabb8e809aaae3af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4dd1fb2c1e6d9242d640bda7514dcf91cac72b352fa12224eabb8e809aaae3af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-194624",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-194624/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-194624",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-194624",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-194624",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "902dca64c8397aa7550a70cc7e6f75021fe6f396f0faa096c79d1ed0047be5c1",
	            "SandboxKey": "/var/run/docker/netns/902dca64c839",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36195"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36196"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36199"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36197"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36198"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-194624": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:18:58:a8:3a:30",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a07a37a22614c338f1cee5badd4649096d4038be4257153abb0f87a5439fb453",
	                    "EndpointID": "c024eaffb23c2dd4384baefc89f89987acad4da4ed2e71673395d1187d1184cf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-194624",
	                        "479a2fdf340a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
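
The inspect dump above shows that the force-systemd-env-194624 node container itself is healthy: State.Status is "running" with ExitCode 0, and every exposed port (22, 2376, 5000, 8443, 32443) is published to a loopback host port. For reference only (not part of the test harness), here is a minimal Go sketch that reads the same fields through the Docker SDK; the container name and the expected values in the comments are taken from the dump above:

	// inspect_state.go: minimal sketch, not minikube code, reading the same
	// fields shown in the inspect dump above via the Docker Go SDK.
	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Container name taken from the log above.
		info, err := cli.ContainerInspect(context.Background(), "force-systemd-env-194624")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("status:", info.State.Status)     // "running" in the dump above
		fmt.Println("exit code:", info.State.ExitCode) // 0 in the dump above
		// Published ports, e.g. 22/tcp -> 127.0.0.1:36195 in the dump above.
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
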
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-194624 -n force-systemd-env-194624
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-194624 -n force-systemd-env-194624: exit status 6 (333.70887ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:12:21.972203 3741846 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-194624" does not appear in /home/jenkins/minikube-integration/22343-3531265/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
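
Exit status 6 here comes from the kubeconfig check, not from the node: status.go logs that the profile has no endpoint entry in the kubeconfig at the path above, which is why the Host reports Running while the kubectl context is stale. A minimal sketch (an assumed helper, not minikube code) that reproduces the lookup with client-go; the kubeconfig path and profile name are taken from the log:

	// kubeconfig_check.go: sketch reproducing the check behind the
	// "does not appear in ... kubeconfig" error logged above.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path taken from the log above.
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22343-3531265/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		name := "force-systemd-env-194624"
		cluster, ok := cfg.Clusters[name]
		if !ok {
			// Matches the status.go:458 message above.
			fmt.Printf("%q does not appear in the kubeconfig\n", name)
			return
		}
		fmt.Println("endpoint:", cluster.Server)
	}
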
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-194624 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-557039 sudo cat /var/lib/kubelet/config.yaml                                                                            │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo systemctl status docker --all --full --no-pager                                                             │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo systemctl cat docker --no-pager                                                                             │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo cat /etc/docker/daemon.json                                                                                 │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo docker system info                                                                                          │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo systemctl status cri-docker --all --full --no-pager                                                         │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo systemctl cat cri-docker --no-pager                                                                         │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                    │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo cat /usr/lib/systemd/system/cri-docker.service                                                              │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo cri-dockerd --version                                                                                       │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo systemctl status containerd --all --full --no-pager                                                         │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo systemctl cat containerd --no-pager                                                                         │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo cat /lib/systemd/system/containerd.service                                                                  │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo cat /etc/containerd/config.toml                                                                             │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo containerd config dump                                                                                      │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo systemctl status crio --all --full --no-pager                                                               │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo systemctl cat crio --no-pager                                                                               │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                     │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ ssh     │ -p cilium-557039 sudo crio config                                                                                                 │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │                     │
	│ delete  │ -p cilium-557039                                                                                                                  │ cilium-557039             │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:06 UTC │
	│ start   │ -p cert-expiration-435404 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                      │ cert-expiration-435404    │ jenkins │ v1.37.0 │ 27 Dec 25 10:06 UTC │ 27 Dec 25 10:07 UTC │
	│ start   │ -p cert-expiration-435404 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                   │ cert-expiration-435404    │ jenkins │ v1.37.0 │ 27 Dec 25 10:10 UTC │ 27 Dec 25 10:10 UTC │
	│ delete  │ -p cert-expiration-435404                                                                                                         │ cert-expiration-435404    │ jenkins │ v1.37.0 │ 27 Dec 25 10:10 UTC │ 27 Dec 25 10:10 UTC │
	│ start   │ -p force-systemd-flag-027208 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-flag-027208 │ jenkins │ v1.37.0 │ 27 Dec 25 10:10 UTC │                     │
	│ ssh     │ force-systemd-env-194624 ssh cat /etc/containerd/config.toml                                                                      │ force-systemd-env-194624  │ jenkins │ v1.37.0 │ 27 Dec 25 10:12 UTC │ 27 Dec 25 10:12 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 10:10:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 10:10:14.060682 3738115 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:10:14.060840 3738115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:10:14.060853 3738115 out.go:374] Setting ErrFile to fd 2...
	I1227 10:10:14.060859 3738115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:10:14.061129 3738115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 10:10:14.061557 3738115 out.go:368] Setting JSON to false
	I1227 10:10:14.062452 3738115 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":57166,"bootTime":1766773048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 10:10:14.062522 3738115 start.go:143] virtualization:  
	I1227 10:10:14.066189 3738115 out.go:179] * [force-systemd-flag-027208] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:10:14.070968 3738115 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:10:14.071126 3738115 notify.go:221] Checking for updates...
	I1227 10:10:14.077634 3738115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:10:14.080928 3738115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 10:10:14.084146 3738115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 10:10:14.087414 3738115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:10:14.090571 3738115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:10:14.094274 3738115 config.go:182] Loaded profile config "force-systemd-env-194624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:10:14.094431 3738115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:10:14.131713 3738115 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:10:14.131835 3738115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:10:14.222716 3738115 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:10:14.212351353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:10:14.222833 3738115 docker.go:319] overlay module found
	I1227 10:10:14.226201 3738115 out.go:179] * Using the docker driver based on user configuration
	I1227 10:10:14.229183 3738115 start.go:309] selected driver: docker
	I1227 10:10:14.229209 3738115 start.go:928] validating driver "docker" against <nil>
	I1227 10:10:14.229223 3738115 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:10:14.229983 3738115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:10:14.283479 3738115 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:10:14.273728372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:10:14.283631 3738115 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 10:10:14.283847 3738115 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 10:10:14.286995 3738115 out.go:179] * Using Docker driver with root privileges
	I1227 10:10:14.290011 3738115 cni.go:84] Creating CNI manager for ""
	I1227 10:10:14.290080 3738115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:10:14.290097 3738115 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 10:10:14.290178 3738115 start.go:353] cluster config:
	{Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

                                                
                                                
	I1227 10:10:14.293396 3738115 out.go:179] * Starting "force-systemd-flag-027208" primary control-plane node in "force-systemd-flag-027208" cluster
	I1227 10:10:14.296262 3738115 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 10:10:14.299201 3738115 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 10:10:14.302027 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:10:14.302080 3738115 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I1227 10:10:14.302089 3738115 cache.go:65] Caching tarball of preloaded images
	I1227 10:10:14.302190 3738115 preload.go:251] Found /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 10:10:14.302205 3738115 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I1227 10:10:14.302312 3738115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json ...
	I1227 10:10:14.302339 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json: {Name:mk8e499633705fb35f3a63ac14b480b9b5477cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:14.302514 3738115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 10:10:14.324411 3738115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 10:10:14.324434 3738115 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 10:10:14.324451 3738115 cache.go:243] Successfully downloaded all kic artifacts
	I1227 10:10:14.324490 3738115 start.go:360] acquireMachinesLock for force-systemd-flag-027208: {Name:mk408a0d777415c6b3bf75190db8aa17e71bedcf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 10:10:14.324601 3738115 start.go:364] duration metric: took 89.656µs to acquireMachinesLock for "force-systemd-flag-027208"
	I1227 10:10:14.324631 3738115 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1227 10:10:14.324705 3738115 start.go:125] createHost starting for "" (driver="docker")
	I1227 10:10:14.328143 3738115 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 10:10:14.328386 3738115 start.go:159] libmachine.API.Create for "force-systemd-flag-027208" (driver="docker")
	I1227 10:10:14.328425 3738115 client.go:173] LocalClient.Create starting
	I1227 10:10:14.328500 3738115 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem
	I1227 10:10:14.328539 3738115 main.go:144] libmachine: Decoding PEM data...
	I1227 10:10:14.328557 3738115 main.go:144] libmachine: Parsing certificate...
	I1227 10:10:14.328611 3738115 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem
	I1227 10:10:14.328633 3738115 main.go:144] libmachine: Decoding PEM data...
	I1227 10:10:14.328646 3738115 main.go:144] libmachine: Parsing certificate...
	I1227 10:10:14.329018 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 10:10:14.345559 3738115 cli_runner.go:211] docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 10:10:14.345658 3738115 network_create.go:284] running [docker network inspect force-systemd-flag-027208] to gather additional debugging logs...
	I1227 10:10:14.345680 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208
	W1227 10:10:14.361855 3738115 cli_runner.go:211] docker network inspect force-systemd-flag-027208 returned with exit code 1
	I1227 10:10:14.361884 3738115 network_create.go:287] error running [docker network inspect force-systemd-flag-027208]: docker network inspect force-systemd-flag-027208: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-027208 not found
	I1227 10:10:14.361897 3738115 network_create.go:289] output of [docker network inspect force-systemd-flag-027208]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-027208 not found
	
	** /stderr **
	I1227 10:10:14.362011 3738115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:10:14.379980 3738115 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d8712ba8a9f7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9e:f2:5a:61:6a:4e} reservation:<nil>}
	I1227 10:10:14.380333 3738115 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-43ae11d059eb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6e:6d:b0:96:78:2a} reservation:<nil>}
	I1227 10:10:14.380708 3738115 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c4bd1426b4b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:5d:63:1e:36:ed} reservation:<nil>}
	I1227 10:10:14.380950 3738115 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a07a37a22614 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:a6:04:fd:b9:e2:9a} reservation:<nil>}
	I1227 10:10:14.381366 3738115 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d1ce0}
	I1227 10:10:14.381389 3738115 network_create.go:124] attempt to create docker network force-systemd-flag-027208 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 10:10:14.381445 3738115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-027208 force-systemd-flag-027208
	I1227 10:10:14.441506 3738115 network_create.go:108] docker network force-systemd-flag-027208 192.168.85.0/24 created
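
The four skips above walk candidate private /24 subnets in steps of 9 (192.168.49.0/24, .58, .67 and .76 are taken by existing bridges) and settle on 192.168.85.0/24. A toy Go sketch of that walk follows; the taken set is hard-coded here purely for illustration, whereas minikube derives it from host interfaces and existing docker networks:

	// free_subnet.go: illustrative sketch of the candidate walk visible in
	// the network.go:211/206 lines above; not minikube's actual code.
	package main

	import "fmt"

	func main() {
		taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
		for third := 49; third <= 254; third += 9 { // step of 9, as in the log
			if taken[third] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
			break
		}
	}
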
	I1227 10:10:14.441539 3738115 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-027208" container
	I1227 10:10:14.441612 3738115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 10:10:14.457713 3738115 cli_runner.go:164] Run: docker volume create force-systemd-flag-027208 --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --label created_by.minikube.sigs.k8s.io=true
	I1227 10:10:14.476328 3738115 oci.go:103] Successfully created a docker volume force-systemd-flag-027208
	I1227 10:10:14.476443 3738115 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-027208-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --entrypoint /usr/bin/test -v force-systemd-flag-027208:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 10:10:15.042844 3738115 oci.go:107] Successfully prepared a docker volume force-systemd-flag-027208
	I1227 10:10:15.042916 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:10:15.042928 3738115 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 10:10:15.043044 3738115 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-027208:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 10:10:18.934663 3738115 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-027208:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (3.891575702s)
	I1227 10:10:18.934700 3738115 kic.go:203] duration metric: took 3.891766533s to extract preloaded images to volume ...
	W1227 10:10:18.934838 3738115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 10:10:18.934972 3738115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 10:10:18.984807 3738115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-027208 --name force-systemd-flag-027208 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-027208 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-027208 --network force-systemd-flag-027208 --ip 192.168.85.2 --volume force-systemd-flag-027208:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 10:10:19.288318 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Running}}
	I1227 10:10:19.312460 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
	I1227 10:10:19.332923 3738115 cli_runner.go:164] Run: docker exec force-systemd-flag-027208 stat /var/lib/dpkg/alternatives/iptables
	I1227 10:10:19.398079 3738115 oci.go:144] the created container "force-systemd-flag-027208" has a running status.
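
Once docker run returns, the harness confirms readiness by inspecting the container state and probing for the iptables alternatives file inside it, as in the three cli_runner lines above. A minimal Go sketch (not the harness itself) shelling out the same probes, with the container name taken from the log:

	// kic_ready.go: sketch of the readiness probes at 10:10:19 above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		name := "force-systemd-flag-027208"
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Running}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if strings.TrimSpace(string(out)) != "true" {
			log.Fatalf("container %s is not running", name)
		}
		// Same probe as the log: the iptables alternatives entry must exist
		// inside the node before provisioning proceeds.
		if err := exec.Command("docker", "exec", name,
			"stat", "/var/lib/dpkg/alternatives/iptables").Run(); err != nil {
			log.Fatalf("iptables probe failed: %v", err)
		}
		fmt.Println("node container is up")
	}
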
	I1227 10:10:19.398134 3738115 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa...
	I1227 10:10:19.979164 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 10:10:19.979299 3738115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 10:10:19.999194 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
	I1227 10:10:20.030475 3738115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 10:10:20.030501 3738115 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-027208 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 10:10:20.074535 3738115 cli_runner.go:164] Run: docker container inspect force-systemd-flag-027208 --format={{.State.Status}}
	I1227 10:10:20.093820 3738115 machine.go:94] provisionDockerMachine start ...
	I1227 10:10:20.093949 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:20.121792 3738115 main.go:144] libmachine: Using SSH client type: native
	I1227 10:10:20.122155 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36225 <nil> <nil>}
	I1227 10:10:20.122171 3738115 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 10:10:20.122773 3738115 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51694->127.0.0.1:36225: read: connection reset by peer
	I1227 10:10:23.267068 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-027208
	
	I1227 10:10:23.267094 3738115 ubuntu.go:182] provisioning hostname "force-systemd-flag-027208"
	I1227 10:10:23.267161 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:23.286197 3738115 main.go:144] libmachine: Using SSH client type: native
	I1227 10:10:23.286515 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36225 <nil> <nil>}
	I1227 10:10:23.286534 3738115 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-027208 && echo "force-systemd-flag-027208" | sudo tee /etc/hostname
	I1227 10:10:23.437194 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-027208
	
	I1227 10:10:23.437279 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:23.456503 3738115 main.go:144] libmachine: Using SSH client type: native
	I1227 10:10:23.456885 3738115 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 36225 <nil> <nil>}
	I1227 10:10:23.456913 3738115 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-027208' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-027208/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-027208' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 10:10:23.595282 3738115 main.go:144] libmachine: SSH cmd err, output: <nil>: 
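
Each provisioning step above runs over SSH to the published 22/tcp port (127.0.0.1:36225 for this container). A minimal sketch of one such step with golang.org/x/crypto/ssh; the key path and port are taken from the log, and host-key checking is skipped here only because the endpoint is loopback:

	// ssh_step.go: sketch of a single provisioning command, not minikube code.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path taken from the kic.go:225 line above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback-only endpoint
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:36225", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname") // same first command as the log
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}
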
	I1227 10:10:23.595307 3738115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-3531265/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-3531265/.minikube}
	I1227 10:10:23.595327 3738115 ubuntu.go:190] setting up certificates
	I1227 10:10:23.595336 3738115 provision.go:84] configureAuth start
	I1227 10:10:23.595398 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
	I1227 10:10:23.612849 3738115 provision.go:143] copyHostCerts
	I1227 10:10:23.612896 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
	I1227 10:10:23.612928 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem, removing ...
	I1227 10:10:23.612938 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem
	I1227 10:10:23.613020 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.pem (1082 bytes)
	I1227 10:10:23.613112 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
	I1227 10:10:23.613137 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem, removing ...
	I1227 10:10:23.613147 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem
	I1227 10:10:23.613184 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/cert.pem (1123 bytes)
	I1227 10:10:23.613236 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
	I1227 10:10:23.613270 3738115 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem, removing ...
	I1227 10:10:23.613277 3738115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem
	I1227 10:10:23.613304 3738115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-3531265/.minikube/key.pem (1675 bytes)
	I1227 10:10:23.613366 3738115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-027208 san=[127.0.0.1 192.168.85.2 force-systemd-flag-027208 localhost minikube]
	I1227 10:10:24.133708 3738115 provision.go:177] copyRemoteCerts
	I1227 10:10:24.133787 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 10:10:24.133831 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.151314 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.250894 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 10:10:24.250995 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1227 10:10:24.269969 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 10:10:24.270032 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 10:10:24.289161 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 10:10:24.289239 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 10:10:24.306849 3738115 provision.go:87] duration metric: took 711.49982ms to configureAuth
	I1227 10:10:24.306875 3738115 ubuntu.go:206] setting minikube options for container-runtime
	I1227 10:10:24.307072 3738115 config.go:182] Loaded profile config "force-systemd-flag-027208": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:10:24.307083 3738115 machine.go:97] duration metric: took 4.213237619s to provisionDockerMachine
	I1227 10:10:24.307090 3738115 client.go:176] duration metric: took 9.978658918s to LocalClient.Create
	I1227 10:10:24.307107 3738115 start.go:167] duration metric: took 9.978722333s to libmachine.API.Create "force-systemd-flag-027208"
	I1227 10:10:24.307114 3738115 start.go:293] postStartSetup for "force-systemd-flag-027208" (driver="docker")
	I1227 10:10:24.307122 3738115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 10:10:24.307178 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 10:10:24.307230 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.324192 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.423140 3738115 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 10:10:24.426587 3738115 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 10:10:24.426659 3738115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 10:10:24.426678 3738115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/addons for local assets ...
	I1227 10:10:24.426739 3738115 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-3531265/.minikube/files for local assets ...
	I1227 10:10:24.426819 3738115 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> 35331472.pem in /etc/ssl/certs
	I1227 10:10:24.426834 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> /etc/ssl/certs/35331472.pem
	I1227 10:10:24.426951 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 10:10:24.434338 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /etc/ssl/certs/35331472.pem (1708 bytes)
	I1227 10:10:24.452382 3738115 start.go:296] duration metric: took 145.254802ms for postStartSetup
	I1227 10:10:24.452762 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
	I1227 10:10:24.469668 3738115 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/config.json ...
	I1227 10:10:24.469957 3738115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 10:10:24.470000 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.486890 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.584309 3738115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 10:10:24.589316 3738115 start.go:128] duration metric: took 10.264593752s to createHost
	I1227 10:10:24.589389 3738115 start.go:83] releasing machines lock for "force-systemd-flag-027208", held for 10.264769864s
	I1227 10:10:24.589479 3738115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-027208
	I1227 10:10:24.607151 3738115 ssh_runner.go:195] Run: cat /version.json
	I1227 10:10:24.607216 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.607537 3738115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 10:10:24.607594 3738115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-027208
	I1227 10:10:24.647065 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.656060 3738115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36225 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/force-systemd-flag-027208/id_rsa Username:docker}
	I1227 10:10:24.852332 3738115 ssh_runner.go:195] Run: systemctl --version
	I1227 10:10:24.859289 3738115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 10:10:24.863820 3738115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 10:10:24.863935 3738115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 10:10:24.894008 3738115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 10:10:24.894085 3738115 start.go:496] detecting cgroup driver to use...
	I1227 10:10:24.894113 3738115 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 10:10:24.894199 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1227 10:10:24.909955 3738115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 10:10:24.924610 3738115 docker.go:218] disabling cri-docker service (if available) ...
	I1227 10:10:24.924679 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1227 10:10:24.943027 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1227 10:10:24.962924 3738115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1227 10:10:25.086519 3738115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1227 10:10:25.217234 3738115 docker.go:234] disabling docker service ...
	I1227 10:10:25.217301 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1227 10:10:25.239443 3738115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1227 10:10:25.253469 3738115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1227 10:10:25.372805 3738115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1227 10:10:25.502827 3738115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 10:10:25.516102 3738115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 10:10:25.530490 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 10:10:25.539633 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 10:10:25.548981 3738115 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 10:10:25.549107 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 10:10:25.558292 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 10:10:25.567719 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 10:10:25.576955 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 10:10:25.586514 3738115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 10:10:25.594864 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 10:10:25.604220 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 10:10:25.613067 3738115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 10:10:25.621797 3738115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 10:10:25.629270 3738115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 10:10:25.637053 3738115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:10:25.760495 3738115 ssh_runner.go:195] Run: sudo systemctl restart containerd
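The run of sed edits above rewrites /etc/containerd/config.toml so the runc shim hands cgroup management to systemd (SystemdCgroup = true), matching --force-systemd, before containerd is restarted. A minimal check that the setting took effect inside the node (a sketch, reusing this run's container name):

	# grep the rewritten config, then ask the runtime for its own view of it
	docker exec force-systemd-flag-027208 sh -c '
	  grep -n "SystemdCgroup" /etc/containerd/config.toml
	  crictl info | grep -i systemdcgroup   # containerd echoes its CRI config here
	'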
	I1227 10:10:25.897831 3738115 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I1227 10:10:25.897957 3738115 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1227 10:10:25.901900 3738115 start.go:574] Will wait 60s for crictl version
	I1227 10:10:25.902037 3738115 ssh_runner.go:195] Run: which crictl
	I1227 10:10:25.905697 3738115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 10:10:25.930207 3738115 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I1227 10:10:25.930328 3738115 ssh_runner.go:195] Run: containerd --version
	I1227 10:10:25.954007 3738115 ssh_runner.go:195] Run: containerd --version
	I1227 10:10:25.981733 3738115 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I1227 10:10:25.984781 3738115 cli_runner.go:164] Run: docker network inspect force-systemd-flag-027208 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 10:10:26.000934 3738115 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 10:10:26.006285 3738115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
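The hosts entry is written with cp rather than mv because /etc/hosts inside a Docker container is a bind mount: renaming a new file over it fails with "Device or resource busy", while cp rewrites the existing inode in place. The same idiom, spelled out (a sketch of the command logged above):

	TMP=/tmp/h.$$
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.85.1\thost.minikube.internal\n'; } > "$TMP"
	sudo cp "$TMP" /etc/hosts   # cp keeps the bind-mounted inode intact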
	I1227 10:10:26.018144 3738115 kubeadm.go:884] updating cluster {Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 10:10:26.018261 3738115 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I1227 10:10:26.018337 3738115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:10:26.050904 3738115 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 10:10:26.050932 3738115 containerd.go:542] Images already preloaded, skipping extraction
	I1227 10:10:26.051019 3738115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1227 10:10:26.077679 3738115 containerd.go:635] all images are preloaded for containerd runtime.
	I1227 10:10:26.077700 3738115 cache_images.go:86] Images are preloaded, skipping loading
	I1227 10:10:26.077708 3738115 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I1227 10:10:26.077812 3738115 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-027208 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
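The drop-in above replaces kubelet's ExecStart so that node identity (--hostname-override, --node-ip) and the bootstrap kubeconfig are pinned per profile; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below. The merged unit can be inspected afterwards (a sketch):

	# show the base unit plus every drop-in, in the order systemd applies them
	docker exec force-systemd-flag-027208 systemctl cat kubelet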
	I1227 10:10:26.077878 3738115 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1227 10:10:26.103476 3738115 cni.go:84] Creating CNI manager for ""
	I1227 10:10:26.103506 3738115 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 10:10:26.103527 3738115 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 10:10:26.103551 3738115 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-027208 NodeName:force-systemd-flag-027208 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 10:10:26.103669 3738115 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-027208"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 10:10:26.103747 3738115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 10:10:26.115900 3738115 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 10:10:26.115969 3738115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 10:10:26.124889 3738115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1227 10:10:26.139449 3738115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 10:10:26.154050 3738115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
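The 2237-byte kubeadm.yaml.new staged here is the four-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Recent kubeadm releases can sanity-check such a file before init ever runs (a sketch, assuming a kubeadm binary of the matching minor version on PATH):

	# validate the rendered config without touching the node
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml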
	I1227 10:10:26.169297 3738115 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 10:10:26.173915 3738115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 10:10:26.184920 3738115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 10:10:26.302987 3738115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 10:10:26.319342 3738115 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208 for IP: 192.168.85.2
	I1227 10:10:26.319367 3738115 certs.go:195] generating shared ca certs ...
	I1227 10:10:26.319382 3738115 certs.go:227] acquiring lock for ca certs: {Name:mk8b517b50583c7fd9315f1419472c192d2e7a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.319519 3738115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key
	I1227 10:10:26.319566 3738115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key
	I1227 10:10:26.319577 3738115 certs.go:257] generating profile certs ...
	I1227 10:10:26.319635 3738115 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key
	I1227 10:10:26.319659 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt with IP's: []
	I1227 10:10:26.459451 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt ...
	I1227 10:10:26.459481 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.crt: {Name:mk84501b4c3d27859a09c7a6cf2970a871461396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.459678 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key ...
	I1227 10:10:26.459696 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/client.key: {Name:mk2ccf9cd6593ffe591c5f10566441231d2db314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.459797 3738115 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b
	I1227 10:10:26.459816 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 10:10:26.619632 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b ...
	I1227 10:10:26.619671 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b: {Name:mk45edfe96d665c299603d64f2aab60b1ce255c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.619859 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b ...
	I1227 10:10:26.619874 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b: {Name:mkbd7ed3b29ae956b5f18bf81df861e3ebc9c0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:26.619963 3738115 certs.go:382] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt.e1bde68b -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt
	I1227 10:10:26.620069 3738115 certs.go:386] copying /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key.e1bde68b -> /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key
	I1227 10:10:26.620138 3738115 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key
	I1227 10:10:26.620158 3738115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt with IP's: []
	I1227 10:10:27.146672 3738115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt ...
	I1227 10:10:27.146707 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt: {Name:mkb638601bcc294803da88d5fdf89e5d664c6575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:27.146874 3738115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key ...
	I1227 10:10:27.146889 3738115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key: {Name:mk1275117485033a42422350e6b97f277389ec3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 10:10:27.146996 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 10:10:27.147022 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 10:10:27.147035 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 10:10:27.147053 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 10:10:27.147065 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 10:10:27.147081 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 10:10:27.147094 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 10:10:27.147106 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 10:10:27.147167 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem (1338 bytes)
	W1227 10:10:27.147209 3738115 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147_empty.pem, impossibly tiny 0 bytes
	I1227 10:10:27.147220 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca-key.pem (1675 bytes)
	I1227 10:10:27.147257 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/ca.pem (1082 bytes)
	I1227 10:10:27.147286 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/cert.pem (1123 bytes)
	I1227 10:10:27.147309 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/key.pem (1675 bytes)
	I1227 10:10:27.147356 3738115 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem (1708 bytes)
	I1227 10:10:27.147392 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.147415 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem -> /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.147433 3738115 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem -> /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.147968 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 10:10:27.172281 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 10:10:27.199732 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 10:10:27.218091 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 10:10:27.236726 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 10:10:27.255815 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 10:10:27.273210 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 10:10:27.291337 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/force-systemd-flag-027208/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 10:10:27.309854 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 10:10:27.327812 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/certs/3533147.pem --> /usr/share/ca-certificates/3533147.pem (1338 bytes)
	I1227 10:10:27.345068 3738115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/ssl/certs/35331472.pem --> /usr/share/ca-certificates/35331472.pem (1708 bytes)
	I1227 10:10:27.363093 3738115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
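At this point every CA and profile cert is staged under /var/lib/minikube/certs. Since the apiserver cert was signed above for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], its SANs can be double-checked with openssl (a sketch, run inside the node):

	# list the Subject Alternative Names baked into the apiserver cert
	openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'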
	I1227 10:10:27.376243 3738115 ssh_runner.go:195] Run: openssl version
	I1227 10:10:27.382542 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.390045 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 10:10:27.397770 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.401584 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:25 /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.401753 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 10:10:27.442821 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 10:10:27.450555 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 10:10:27.458392 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.465875 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3533147.pem /etc/ssl/certs/3533147.pem
	I1227 10:10:27.473778 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.477818 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:31 /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.477901 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3533147.pem
	I1227 10:10:27.521479 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 10:10:27.529246 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3533147.pem /etc/ssl/certs/51391683.0
	I1227 10:10:27.537210 3738115 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.545185 3738115 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/35331472.pem /etc/ssl/certs/35331472.pem
	I1227 10:10:27.553062 3738115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.557000 3738115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:31 /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.557069 3738115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35331472.pem
	I1227 10:10:27.598533 3738115 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 10:10:27.606100 3738115 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/35331472.pem /etc/ssl/certs/3ec20f2e.0
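The ln -fs calls above reproduce what c_rehash does: at verification time OpenSSL looks a CA up by a filename derived from its subject hash, so each PEM is linked as <hash>.0 (b5213941.0 for minikubeCA.pem in this run). The naming can be reproduced by hand (a sketch):

	# compute the subject hash and create the matching lookup symlink
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"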
	I1227 10:10:27.614561 3738115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 10:10:27.619167 3738115 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 10:10:27.619261 3738115 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-027208 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-027208 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 10:10:27.619373 3738115 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1227 10:10:27.619454 3738115 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1227 10:10:27.664708 3738115 cri.go:96] found id: ""
	I1227 10:10:27.664808 3738115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 10:10:27.676293 3738115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 10:10:27.684601 3738115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:10:27.684711 3738115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:10:27.693229 3738115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:10:27.693252 3738115 kubeadm.go:158] found existing configuration files:
	
	I1227 10:10:27.693326 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:10:27.701375 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:10:27.701465 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:10:27.709152 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:10:27.717622 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:10:27.717691 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:10:27.725649 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:10:27.733875 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:10:27.733981 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:10:27.741583 3738115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:10:27.749413 3738115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:10:27.749491 3738115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:10:27.757332 3738115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:10:27.795629 3738115 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:10:27.795779 3738115 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:10:27.898250 3738115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:10:27.898344 3738115 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:10:27.898392 3738115 kubeadm.go:319] OS: Linux
	I1227 10:10:27.898440 3738115 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:10:27.898492 3738115 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:10:27.898543 3738115 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:10:27.898594 3738115 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:10:27.898647 3738115 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:10:27.898703 3738115 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:10:27.898753 3738115 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:10:27.898801 3738115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:10:27.898850 3738115 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:10:27.969995 3738115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:10:27.970212 3738115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:10:27.970343 3738115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:10:27.975838 3738115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:10:27.982447 3738115 out.go:252]   - Generating certificates and keys ...
	I1227 10:10:27.982636 3738115 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:10:27.982764 3738115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:10:28.179272 3738115 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 10:10:28.301146 3738115 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 10:10:28.409704 3738115 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 10:10:28.575840 3738115 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 10:10:28.653265 3738115 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 10:10:28.653619 3738115 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:10:29.172495 3738115 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 10:10:29.173136 3738115 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-027208 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 10:10:29.225627 3738115 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 10:10:29.920042 3738115 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 10:10:30.152507 3738115 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 10:10:30.153337 3738115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:10:30.333897 3738115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:10:30.680029 3738115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:10:30.828481 3738115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:10:30.943020 3738115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:10:31.110010 3738115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:10:31.110883 3738115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:10:31.114899 3738115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:10:31.121179 3738115 out.go:252]   - Booting up control plane ...
	I1227 10:10:31.121296 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:10:31.121382 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:10:31.121448 3738115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:10:31.138571 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:10:31.139005 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:10:31.146921 3738115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:10:31.147313 3738115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:10:31.147361 3738115 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:10:31.282879 3738115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:10:31.283057 3738115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:12:20.808075 3717561 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000282189s
	I1227 10:12:20.808112 3717561 kubeadm.go:319] 
	I1227 10:12:20.808198 3717561 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:12:20.808253 3717561 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:12:20.808380 3717561 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:12:20.808389 3717561 kubeadm.go:319] 
	I1227 10:12:20.808513 3717561 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:12:20.808558 3717561 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:12:20.808590 3717561 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:12:20.808595 3717561 kubeadm.go:319] 
	I1227 10:12:20.813848 3717561 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:12:20.814330 3717561 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:12:20.814451 3717561 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:12:20.814721 3717561 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:12:20.814743 3717561 kubeadm.go:319] 
	I1227 10:12:20.814815 3717561 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
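kubeadm waited the full 4m0s for http://127.0.0.1:10248/healthz and gave up, so the kubelet never became healthy under the forced systemd cgroup driver. One culprit consistent with the preflight warnings above is the cgroup version: kubelet v1.35 refuses to run on cgroups v1 unless FailCgroupV1 is explicitly set to false, and the warning indicates this host is still on v1. A quick host-side check (a sketch):

	# cgroup2fs => unified hierarchy (v2); tmpfs => legacy cgroups v1
	stat -fc %T /sys/fs/cgroup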
	I1227 10:12:20.814885 3717561 kubeadm.go:403] duration metric: took 8m6.667085127s to StartCluster
	I1227 10:12:20.814932 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:12:20.815018 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:12:20.840078 3717561 cri.go:96] found id: ""
	I1227 10:12:20.840113 3717561 logs.go:282] 0 containers: []
	W1227 10:12:20.840122 3717561 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:12:20.840129 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1227 10:12:20.840188 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:12:20.885786 3717561 cri.go:96] found id: ""
	I1227 10:12:20.885814 3717561 logs.go:282] 0 containers: []
	W1227 10:12:20.885823 3717561 logs.go:284] No container was found matching "etcd"
	I1227 10:12:20.885829 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1227 10:12:20.885895 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:12:20.958282 3717561 cri.go:96] found id: ""
	I1227 10:12:20.958307 3717561 logs.go:282] 0 containers: []
	W1227 10:12:20.958316 3717561 logs.go:284] No container was found matching "coredns"
	I1227 10:12:20.958323 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:12:20.958382 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:12:20.983543 3717561 cri.go:96] found id: ""
	I1227 10:12:20.983568 3717561 logs.go:282] 0 containers: []
	W1227 10:12:20.983577 3717561 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:12:20.983583 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:12:20.983660 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:12:21.009164 3717561 cri.go:96] found id: ""
	I1227 10:12:21.009191 3717561 logs.go:282] 0 containers: []
	W1227 10:12:21.009208 3717561 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:12:21.009215 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:12:21.009298 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:12:21.035983 3717561 cri.go:96] found id: ""
	I1227 10:12:21.036068 3717561 logs.go:282] 0 containers: []
	W1227 10:12:21.036092 3717561 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:12:21.036116 3717561 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1227 10:12:21.036201 3717561 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:12:21.062002 3717561 cri.go:96] found id: ""
	I1227 10:12:21.062024 3717561 logs.go:282] 0 containers: []
	W1227 10:12:21.062032 3717561 logs.go:284] No container was found matching "kindnet"
	I1227 10:12:21.062043 3717561 logs.go:123] Gathering logs for kubelet ...
	I1227 10:12:21.062055 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:12:21.120013 3717561 logs.go:123] Gathering logs for dmesg ...
	I1227 10:12:21.120051 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:12:21.135388 3717561 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:12:21.135416 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:12:21.203963 3717561 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:12:21.196052    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.196781    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.198310    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.198854    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.200018    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:12:21.196052    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.196781    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.198310    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.198854    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:21.200018    4877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 10:12:21.203987 3717561 logs.go:123] Gathering logs for containerd ...
	I1227 10:12:21.204000 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1227 10:12:21.243349 3717561 logs.go:123] Gathering logs for container status ...
	I1227 10:12:21.243385 3717561 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 10:12:21.272276 3717561 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282189s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:12:21.272328 3717561 out.go:285] * 
	W1227 10:12:21.272388 3717561 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282189s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:12:21.272405 3717561 out.go:285] * 
	W1227 10:12:21.272654 3717561 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:12:21.279213 3717561 out.go:203] 
	W1227 10:12:21.282884 3717561 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000282189s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:12:21.282998 3717561 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:12:21.283026 3717561 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:12:21.286058 3717561 out.go:203] 
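
* Triage note: the run above dies at the kubelet health probe (http://127.0.0.1:10248/healthz), and the Suggestion line points at a cgroup-driver override. A minimal reproduction sketch, assuming the force-systemd-env-194624 profile from this run still exists (it is deleted during cleanup below) and that curl is available inside the node image:

	# probe the same endpoint kubeadm polls for up to 4m0s
	out/minikube-linux-arm64 ssh -p force-systemd-env-194624 -- curl -sSL http://127.0.0.1:10248/healthz
	# follow the 'journalctl -xeu kubelet' advice printed above
	out/minikube-linux-arm64 ssh -p force-systemd-env-194624 -- sudo journalctl -xeu kubelet --no-pager
	# retry with the override named in the Suggestion line
	out/minikube-linux-arm64 start -p force-systemd-env-194624 --extra-config=kubelet.cgroup-driver=systemd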
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013309165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013323753Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013377413Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013399723Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013410200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013422385Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013432124Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013443266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013464837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013506289Z" level=info msg="Connect containerd service"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.013861006Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.014499672Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.033006116Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.033075858Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.033106881Z" level=info msg="Start subscribing containerd event"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.033156209Z" level=info msg="Start recovering state"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.098767157Z" level=info msg="Start event monitor"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.098830844Z" level=info msg="Start cni network conf syncer for default"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.098841100Z" level=info msg="Start streaming server"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.098852899Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.098863123Z" level=info msg="runtime interface starting up..."
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.098871024Z" level=info msg="starting plugins..."
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.099092501Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 27 10:04:12 force-systemd-env-194624 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 27 10:04:12 force-systemd-env-194624 containerd[759]: time="2025-12-27T10:04:12.104500961Z" level=info msg="containerd successfully booted in 0.117451s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:12:22.641594    5009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:22.642277    5009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:22.643933    5009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:22.644352    5009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:12:22.645868    5009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec27 09:24] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:12:22 up 15:54,  0 user,  load average: 0.96, 1.40, 2.06
	Linux force-systemd-env-194624 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 10:12:19 force-systemd-env-194624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:12:20 force-systemd-env-194624 kubelet[4803]: E1227 10:12:20.171457    4803 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:12:20 force-systemd-env-194624 kubelet[4822]: E1227 10:12:20.934766    4822 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:12:20 force-systemd-env-194624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:12:21 force-systemd-env-194624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 10:12:21 force-systemd-env-194624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:12:21 force-systemd-env-194624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:12:21 force-systemd-env-194624 kubelet[4903]: E1227 10:12:21.718322    4903 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:12:21 force-systemd-env-194624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:12:21 force-systemd-env-194624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:12:22 force-systemd-env-194624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 10:12:22 force-systemd-env-194624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:12:22 force-systemd-env-194624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:12:22 force-systemd-env-194624 kubelet[4962]: E1227 10:12:22.435581    4962 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:12:22 force-systemd-env-194624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:12:22 force-systemd-env-194624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-194624 -n force-systemd-env-194624
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-194624 -n force-systemd-env-194624: exit status 6 (331.137515ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1227 10:12:23.080108 3742071 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-194624" does not appear in /home/jenkins/minikube-integration/22343-3531265/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-194624" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-194624" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-194624
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-194624: (1.983009907s)
--- FAIL: TestForceSystemdEnv (507.95s)
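
The kubelet journal above gives the concrete cause for both force-systemd failures: kubelet v1.35 exits on a cgroup v1 host unless the option named in the [WARNING SystemVerification] message is set to false. A sketch for confirming the cgroup mode and expressing that opt-in, assuming the v1beta1 camelCase spelling failCgroupV1 for the 'FailCgroupV1' option the warning cites:

	# cgroup2fs means cgroup v2; tmpfs means cgroup v1, which is what this host runs
	# (the docker driver shares the host kernel's cgroup hierarchy with the node)
	stat -fc %T /sys/fs/cgroup
	# the opt-in as a kubelet configuration fragment; field spelling is an assumption,
	# see the KEP link in the warning text
	cat <<'EOF' > kubelet-cgroup-v1-optin.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF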

Test pass (305/337)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.06
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 4.31
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.16
18 TestDownloadOnly/v1.35.0/DeleteAll 0.34
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 123.95
29 TestAddons/serial/Volcano 39.63
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.86
35 TestAddons/parallel/Registry 16.81
36 TestAddons/parallel/RegistryCreds 0.78
37 TestAddons/parallel/Ingress 18.96
38 TestAddons/parallel/InspektorGadget 11.06
39 TestAddons/parallel/MetricsServer 6.82
41 TestAddons/parallel/CSI 47.15
42 TestAddons/parallel/Headlamp 17.94
43 TestAddons/parallel/CloudSpanner 6.62
44 TestAddons/parallel/LocalPath 52.64
45 TestAddons/parallel/NvidiaDevicePlugin 6.11
46 TestAddons/parallel/Yakd 11.92
48 TestAddons/StoppedEnableDisable 12.36
49 TestCertOptions 30.16
50 TestCertExpiration 216.23
54 TestDockerEnvContainerd 42.6
58 TestErrorSpam/setup 25.83
59 TestErrorSpam/start 0.82
60 TestErrorSpam/status 1.22
61 TestErrorSpam/pause 1.63
62 TestErrorSpam/unpause 2.06
63 TestErrorSpam/stop 1.65
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 45.45
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.15
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.58
75 TestFunctional/serial/CacheCmd/cache/add_local 1.24
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.86
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 46.96
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.5
86 TestFunctional/serial/LogsFileCmd 1.57
87 TestFunctional/serial/InvalidService 4.67
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 8.97
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.12
97 TestFunctional/parallel/ServiceCmdConnect 8.6
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 22.05
101 TestFunctional/parallel/SSHCmd 0.58
102 TestFunctional/parallel/CpCmd 2.06
104 TestFunctional/parallel/FileSync 0.39
105 TestFunctional/parallel/CertSync 2.19
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
113 TestFunctional/parallel/License 0.38
114 TestFunctional/parallel/Version/short 0.09
115 TestFunctional/parallel/Version/components 1.28
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
121 TestFunctional/parallel/ImageCommands/Setup 0.72
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.49
127 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.35
138 TestFunctional/parallel/ServiceCmd/List 0.35
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.6
141 TestFunctional/parallel/ServiceCmd/Format 0.4
142 TestFunctional/parallel/ServiceCmd/URL 0.4
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
150 TestFunctional/parallel/ProfileCmd/profile_list 0.52
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
152 TestFunctional/parallel/MountCmd/any-port 8.58
153 TestFunctional/parallel/MountCmd/specific-port 2.27
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.85
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 179.35
163 TestMultiControlPlane/serial/DeployApp 7.28
164 TestMultiControlPlane/serial/PingHostFromPods 1.63
165 TestMultiControlPlane/serial/AddWorkerNode 30.15
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
168 TestMultiControlPlane/serial/CopyFile 20.03
169 TestMultiControlPlane/serial/StopSecondaryNode 13.01
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.62
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.16
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.86
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.73
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.56
177 TestMultiControlPlane/serial/RestartCluster 60.05
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 46.09
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.16
185 TestJSONOutput/start/Command 47.04
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.71
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.64
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.01
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 34.75
211 TestKicCustomNetwork/use_default_bridge_network 30.78
212 TestKicExistingNetwork 28.99
213 TestKicCustomSubnet 31.5
214 TestKicStaticIP 31.73
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 65.18
219 TestMountStart/serial/StartWithMountFirst 8.7
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.76
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 8.22
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 74.5
231 TestMultiNode/serial/DeployApp2Nodes 5.5
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 28.57
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.55
237 TestMultiNode/serial/StopNode 2.39
238 TestMultiNode/serial/StartAfterStop 7.76
239 TestMultiNode/serial/RestartKeepsNodes 78.84
240 TestMultiNode/serial/DeleteNode 5.75
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 48.03
243 TestMultiNode/serial/ValidateNameConflict 30.31
250 TestScheduledStopUnix 103.45
253 TestInsufficientStorage 12.19
254 TestRunningBinaryUpgrade 70.99
256 TestKubernetesUpgrade 334.88
257 TestMissingContainerUpgrade 139.67
259 TestPause/serial/Start 50.84
260 TestPause/serial/SecondStartNoReconfiguration 7.42
261 TestPause/serial/Pause 1.18
262 TestPause/serial/VerifyStatus 0.46
263 TestPause/serial/Unpause 1.07
264 TestPause/serial/PauseAgain 1.27
265 TestPause/serial/DeletePaused 3.57
266 TestPause/serial/VerifyDeletedResources 0.82
267 TestStoppedBinaryUpgrade/Setup 0.76
268 TestStoppedBinaryUpgrade/Upgrade 311.1
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
277 TestPreload/Start-NoPreload-PullImage 68.03
278 TestPreload/Restart-With-Preload-Check-User-Image 47.12
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
282 TestNoKubernetes/serial/StartWithK8s 26.46
283 TestNoKubernetes/serial/StartWithStopK8s 7.38
284 TestNoKubernetes/serial/Start 4.94
285 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
287 TestNoKubernetes/serial/ProfileList 1.04
288 TestNoKubernetes/serial/Stop 1.29
289 TestNoKubernetes/serial/StartNoArgs 6.54
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
298 TestNetworkPlugins/group/false 3.58
303 TestStartStop/group/old-k8s-version/serial/FirstStart 61.12
304 TestStartStop/group/old-k8s-version/serial/DeployApp 8.38
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.25
306 TestStartStop/group/old-k8s-version/serial/Stop 12.08
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/old-k8s-version/serial/SecondStart 27.79
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 9.01
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.41
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
312 TestStartStop/group/old-k8s-version/serial/Pause 3.31
314 TestStartStop/group/no-preload/serial/FirstStart 51.54
315 TestStartStop/group/no-preload/serial/DeployApp 9.33
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
317 TestStartStop/group/no-preload/serial/Stop 12.11
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
319 TestStartStop/group/no-preload/serial/SecondStart 51.69
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
323 TestStartStop/group/no-preload/serial/Pause 3.08
325 TestStartStop/group/embed-certs/serial/FirstStart 46.93
326 TestStartStop/group/embed-certs/serial/DeployApp 9.34
327 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
328 TestStartStop/group/embed-certs/serial/Stop 12.47
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.82
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
332 TestStartStop/group/embed-certs/serial/SecondStart 28.23
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
336 TestStartStop/group/embed-certs/serial/Pause 3.17
338 TestStartStop/group/newest-cni/serial/FirstStart 33.86
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.5
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.36
341 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.31
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 60.86
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.91
346 TestStartStop/group/newest-cni/serial/Stop 1.56
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
348 TestStartStop/group/newest-cni/serial/SecondStart 18.3
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
352 TestStartStop/group/newest-cni/serial/Pause 2.99
353 TestPreload/PreloadSrc/gcs 4.25
354 TestPreload/PreloadSrc/github 4.65
355 TestPreload/PreloadSrc/gcs-cached 0.46
356 TestNetworkPlugins/group/auto/Start 47.14
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.02
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.46
361 TestNetworkPlugins/group/kindnet/Start 48.19
362 TestNetworkPlugins/group/auto/KubeletFlags 0.4
363 TestNetworkPlugins/group/auto/NetCatPod 10.35
364 TestNetworkPlugins/group/auto/DNS 0.25
365 TestNetworkPlugins/group/auto/Localhost 0.17
366 TestNetworkPlugins/group/auto/HairPin 0.19
367 TestNetworkPlugins/group/calico/Start 58.84
368 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
369 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
370 TestNetworkPlugins/group/kindnet/NetCatPod 9.4
371 TestNetworkPlugins/group/kindnet/DNS 0.2
372 TestNetworkPlugins/group/kindnet/Localhost 0.17
373 TestNetworkPlugins/group/kindnet/HairPin 0.21
374 TestNetworkPlugins/group/custom-flannel/Start 58.72
375 TestNetworkPlugins/group/calico/ControllerPod 6.01
376 TestNetworkPlugins/group/calico/KubeletFlags 0.4
377 TestNetworkPlugins/group/calico/NetCatPod 11.35
378 TestNetworkPlugins/group/calico/DNS 0.27
379 TestNetworkPlugins/group/calico/Localhost 0.17
380 TestNetworkPlugins/group/calico/HairPin 0.2
381 TestNetworkPlugins/group/enable-default-cni/Start 79.13
382 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
383 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.4
384 TestNetworkPlugins/group/custom-flannel/DNS 0.25
385 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
386 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
387 TestNetworkPlugins/group/flannel/Start 49.78
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
393 TestNetworkPlugins/group/flannel/ControllerPod 6.01
394 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
395 TestNetworkPlugins/group/flannel/NetCatPod 11.39
396 TestNetworkPlugins/group/flannel/DNS 0.27
397 TestNetworkPlugins/group/flannel/Localhost 0.2
398 TestNetworkPlugins/group/flannel/HairPin 0.19
399 TestNetworkPlugins/group/bridge/Start 48.2
400 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
401 TestNetworkPlugins/group/bridge/NetCatPod 9.27
402 TestNetworkPlugins/group/bridge/DNS 0.21
403 TestNetworkPlugins/group/bridge/Localhost 0.14
404 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.28.0/json-events (5.06s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-077296 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-077296 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.055297976s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.06s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 09:25:17.304889 3533147 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1227 09:25:17.304980 3533147 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-077296
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-077296: exit status 85 (93.786707ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-077296 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-077296 │ jenkins │ v1.37.0 │ 27 Dec 25 09:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:25:12
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:25:12.289944 3533153 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:25:12.290071 3533153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:25:12.290086 3533153 out.go:374] Setting ErrFile to fd 2...
	I1227 09:25:12.290092 3533153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:25:12.290358 3533153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	W1227 09:25:12.290496 3533153 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22343-3531265/.minikube/config/config.json: open /home/jenkins/minikube-integration/22343-3531265/.minikube/config/config.json: no such file or directory
	I1227 09:25:12.290909 3533153 out.go:368] Setting JSON to true
	I1227 09:25:12.291742 3533153 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":54465,"bootTime":1766773048,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 09:25:12.291814 3533153 start.go:143] virtualization:  
	I1227 09:25:12.297096 3533153 out.go:99] [download-only-077296] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1227 09:25:12.297300 3533153 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 09:25:12.297429 3533153 notify.go:221] Checking for updates...
	I1227 09:25:12.301458 3533153 out.go:171] MINIKUBE_LOCATION=22343
	I1227 09:25:12.305464 3533153 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:25:12.308944 3533153 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 09:25:12.312321 3533153 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 09:25:12.315623 3533153 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 09:25:12.321972 3533153 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:25:12.322362 3533153 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:25:12.345852 3533153 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:25:12.345962 3533153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:25:12.425265 3533153 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 09:25:12.415839965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:25:12.425371 3533153 docker.go:319] overlay module found
	I1227 09:25:12.428768 3533153 out.go:99] Using the docker driver based on user configuration
	I1227 09:25:12.428806 3533153 start.go:309] selected driver: docker
	I1227 09:25:12.428813 3533153 start.go:928] validating driver "docker" against <nil>
	I1227 09:25:12.428914 3533153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:25:12.482306 3533153 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 09:25:12.472674671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:25:12.482468 3533153 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:25:12.482739 3533153 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 09:25:12.482888 3533153 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:25:12.486279 3533153 out.go:171] Using Docker driver with root privileges
	I1227 09:25:12.489373 3533153 cni.go:84] Creating CNI manager for ""
	I1227 09:25:12.489445 3533153 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1227 09:25:12.489464 3533153 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I1227 09:25:12.489543 3533153 start.go:353] cluster config:
	{Name:download-only-077296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-077296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:25:12.492558 3533153 out.go:99] Starting "download-only-077296" primary control-plane node in "download-only-077296" cluster
	I1227 09:25:12.492586 3533153 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1227 09:25:12.495481 3533153 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:25:12.495530 3533153 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 09:25:12.495705 3533153 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:25:12.512141 3533153 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:25:12.512969 3533153 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:25:12.513083 3533153 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:25:12.548064 3533153 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 09:25:12.548094 3533153 cache.go:65] Caching tarball of preloaded images
	I1227 09:25:12.548316 3533153 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 09:25:12.551707 3533153 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 09:25:12.551744 3533153 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 09:25:12.551755 3533153 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1227 09:25:12.634661 3533153 preload.go:313] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1227 09:25:12.634796 3533153 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1227 09:25:16.645991 3533153 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1227 09:25:16.646499 3533153 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/download-only-077296/config.json ...
	I1227 09:25:16.646560 3533153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/download-only-077296/config.json: {Name:mkca36be83df5ded989616f048ec0bb0b92dfb7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:25:16.647490 3533153 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1227 09:25:16.649059 3533153 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-077296 host does not exist
	  To start a cluster, run: "minikube start -p download-only-077296"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
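
The Last Start trace above shows the preload handshake: minikube fetches the tarball's md5 from the GCS API (38d7f581f2fa4226c8af2c9106b982b7 here), then pins it on the download URL via ?checksum=md5:..., so the transfer is verified on arrival. The cached copy can be re-checked by hand; a sketch assuming the cache path from this run, and noting that the GCS JSON API returns md5Hash base64-encoded rather than hex:

	# hex digest of the cached tarball; should match the checksum in the log above
	md5sum /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	# object metadata via the JSON API (endpoint shape assumed from the bucket/object in the log)
	curl -s "https://storage.googleapis.com/storage/v1/b/minikube-preloaded-volume-tarballs/o/v18%2Fv1.28.0%2Fpreloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?fields=md5Hash"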

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-077296
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (4.31s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-697395 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-697395 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.311970256s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (4.31s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 09:25:22.070016 3533147 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I1227 09:25:22.070057 3533147 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.16s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-697395
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-697395: exit status 85 (157.809259ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-077296 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-077296 │ jenkins │ v1.37.0 │ 27 Dec 25 09:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 09:25 UTC │ 27 Dec 25 09:25 UTC │
	│ delete  │ -p download-only-077296                                                                                                                                                               │ download-only-077296 │ jenkins │ v1.37.0 │ 27 Dec 25 09:25 UTC │ 27 Dec 25 09:25 UTC │
	│ start   │ -o=json --download-only -p download-only-697395 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-697395 │ jenkins │ v1.37.0 │ 27 Dec 25 09:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:25:17
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:25:17.799993 3533352 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:25:17.800174 3533352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:25:17.800197 3533352 out.go:374] Setting ErrFile to fd 2...
	I1227 09:25:17.800216 3533352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:25:17.800505 3533352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:25:17.800967 3533352 out.go:368] Setting JSON to true
	I1227 09:25:17.802123 3533352 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":54470,"bootTime":1766773048,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 09:25:17.802356 3533352 start.go:143] virtualization:  
	I1227 09:25:17.805950 3533352 out.go:99] [download-only-697395] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:25:17.806541 3533352 notify.go:221] Checking for updates...
	I1227 09:25:17.809120 3533352 out.go:171] MINIKUBE_LOCATION=22343
	I1227 09:25:17.812081 3533352 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:25:17.815056 3533352 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 09:25:17.818105 3533352 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 09:25:17.821095 3533352 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 09:25:17.826899 3533352 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:25:17.827216 3533352 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:25:17.857980 3533352 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:25:17.858085 3533352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:25:17.912868 3533352 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 09:25:17.903633393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:25:17.912993 3533352 docker.go:319] overlay module found
	I1227 09:25:17.915991 3533352 out.go:99] Using the docker driver based on user configuration
	I1227 09:25:17.916039 3533352 start.go:309] selected driver: docker
	I1227 09:25:17.916047 3533352 start.go:928] validating driver "docker" against <nil>
	I1227 09:25:17.916173 3533352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:25:17.971752 3533352 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 09:25:17.962013216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:25:17.971915 3533352 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:25:17.972192 3533352 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 09:25:17.972349 3533352 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:25:17.975681 3533352 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-697395 host does not exist
	  To start a cluster, run: "minikube start -p download-only-697395"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.16s)

TestDownloadOnly/v1.35.0/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.34s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-697395
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1227 09:25:23.935093 3533147 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-939842 --alsologtostderr --binary-mirror http://127.0.0.1:44351 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-939842" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-939842
--- PASS: TestBinaryMirror (0.63s)
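
Note: the --binary-mirror flag above redirects the kubectl/kubelet/kubeadm binary downloads to an alternate HTTP endpoint (here a short-lived local server on 127.0.0.1:44351). A minimal sketch of the same flag by hand, assuming a hypothetical mirror.internal host that serves the dl.k8s.io release layout:

  # mirror.internal is a placeholder; any server exposing the release tree works
  out/minikube-linux-arm64 start --download-only -p mirror-demo \
    --binary-mirror http://mirror.internal:8080 \
    --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 delete -p mirror-demo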

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-888652
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-888652: exit status 85 (74.606746ms)

-- stdout --
	* Profile "addons-888652" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-888652"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-888652
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-888652: exit status 85 (70.021424ms)

-- stdout --
	* Profile "addons-888652" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-888652"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (123.95s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-888652 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-888652 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.947294607s)
--- PASS: TestAddons/Setup (123.95s)
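
Note: the start invocation above enables sixteen addons in one shot via repeated --addons flags. The same pattern with a smaller set, as a sketch (profile name hypothetical):

  out/minikube-linux-arm64 start -p addons-demo --memory=4096 \
    --addons=registry --addons=metrics-server --addons=ingress \
    --driver=docker --container-runtime=containerd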

TestAddons/serial/Volcano (39.63s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 48.039762ms
addons_test.go:878: volcano-admission stabilized in 48.23554ms
addons_test.go:886: volcano-controller stabilized in 48.695723ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-6c7b5cd66b-9b86l" [fc6a6b5d-ffbb-495d-b4d2-781d851b7bb7] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003601602s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-7f4844c49c-72s4z" [44a71817-3d3c-41e0-b3e6-ef24dcdd6136] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00372209s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-8f57bcd69-fmjkl" [7afc60c7-7dad-491c-8762-ebb2984dcb37] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003874027s
addons_test.go:905: (dbg) Run:  kubectl --context addons-888652 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-888652 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-888652 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [73ddcdeb-dd9e-4bfe-b69e-a50d47f53568] Pending
helpers_test.go:353: "test-job-nginx-0" [73ddcdeb-dd9e-4bfe-b69e-a50d47f53568] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [73ddcdeb-dd9e-4bfe-b69e-a50d47f53568] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003668193s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable volcano --alsologtostderr -v=1: (11.815336158s)
--- PASS: TestAddons/serial/Volcano (39.63s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-888652 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-888652 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-888652 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-888652 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5a241ccf-c5a6-4605-befa-47882b80d619] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5a241ccf-c5a6-4605-befa-47882b80d619] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.005106657s
addons_test.go:696: (dbg) Run:  kubectl --context addons-888652 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-888652 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-888652 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-888652 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.86s)
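
Note: the exec checks above confirm that the gcp-auth admission webhook injects fake credentials into newly created pods. The same verification by hand, against the profile used in this run:

  # the webhook mounts a key file and sets the env vars at pod admission time
  kubectl --context addons-888652 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-888652 exec busybox -- /bin/sh -c "cat /google-app-creds.json"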

TestAddons/parallel/Registry (16.81s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.652606ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-5qsrm" [e95ad7e9-07bb-4d6c-9101-b042b17773da] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003670379s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-69ghf" [a5b815a6-0187-45b6-a58d-b2c588dd9042] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00422664s
addons_test.go:394: (dbg) Run:  kubectl --context addons-888652 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-888652 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-888652 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.766118407s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 ip
2025/12/27 09:28:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.81s)
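
Note: the wget probe above is the standard way to check the registry addon from inside the cluster. A sketch with a hypothetical pod name:

  # --spider only requests headers; success means cluster DNS resolves the
  # service name and the registry answers on port 80 in-cluster
  kubectl --context addons-888652 run --rm registry-check --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"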

TestAddons/parallel/RegistryCreds (0.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.257244ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-888652
addons_test.go:334: (dbg) Run:  kubectl --context addons-888652 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

TestAddons/parallel/Ingress (18.96s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-888652 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-888652 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-888652 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [8a042519-84f7-4434-8940-1e635bd63b28] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [8a042519-84f7-4434-8940-1e635bd63b28] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00287673s
I1227 09:30:03.460219 3533147 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-888652 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable ingress-dns --alsologtostderr -v=1: (1.530131834s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable ingress --alsologtostderr -v=1: (7.834678603s)
--- PASS: TestAddons/parallel/Ingress (18.96s)
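
Note: the two routing checks above can be reproduced by hand. Ingress rule matching is driven by the Host header, and the ingress-dns addon answers DNS queries at the node IP:

  out/minikube-linux-arm64 -p addons-888652 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-888652 ip)"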

TestAddons/parallel/InspektorGadget (11.06s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-qh5rb" [28a6a1f5-16f4-4597-b91e-0ce1466b53a0] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.067204352s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable inspektor-gadget --alsologtostderr -v=1: (5.993180173s)
--- PASS: TestAddons/parallel/InspektorGadget (11.06s)

TestAddons/parallel/MetricsServer (6.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.492656ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-97b2t" [241f764f-3c16-4bff-a372-d2f8ef6c1d4a] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003217182s
addons_test.go:465: (dbg) Run:  kubectl --context addons-888652 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)
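
Note: once the metrics-server pod is healthy, resource usage becomes queryable through the metrics API, which is exactly what the test exercises:

  kubectl --context addons-888652 top pods -n kube-system
  kubectl --context addons-888652 top nodes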

TestAddons/parallel/CSI (47.15s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1227 09:29:10.139582 3533147 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 09:29:10.143648 3533147 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 09:29:10.143684 3533147 kapi.go:107] duration metric: took 8.780878ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.794039ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [7840a674-3030-4925-94bb-e6483b7d553d] Pending
helpers_test.go:353: "task-pv-pod" [7840a674-3030-4925-94bb-e6483b7d553d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [7840a674-3030-4925-94bb-e6483b7d553d] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003232552s
addons_test.go:574: (dbg) Run:  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-888652 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-888652 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-888652 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-888652 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [568554c6-d4ff-4541-a734-60a98cfb34c2] Pending
helpers_test.go:353: "task-pv-pod-restore" [568554c6-d4ff-4541-a734-60a98cfb34c2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [568554c6-d4ff-4541-a734-60a98cfb34c2] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004213107s
addons_test.go:616: (dbg) Run:  kubectl --context addons-888652 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-888652 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-888652 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable volumesnapshots --alsologtostderr -v=1: (1.214718914s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.946260173s)
--- PASS: TestAddons/parallel/CSI (47.15s)
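
Note: the sequence above is a full snapshot/restore round trip through the csi-hostpath driver. Condensed, the flow the test walks is:

  # 1. provision a PVC and a pod that writes through the CSI driver
  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  # 2. snapshot the bound volume
  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/snapshot.yaml
  # 3. restore the snapshot into a new PVC and mount it from a second pod
  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-888652 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml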

TestAddons/parallel/Headlamp (17.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-888652 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-888652 --alsologtostderr -v=1: (1.099204791s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-kjnrd" [00b92ff4-9cbd-44ab-9ed4-3ff789651abd] Pending
helpers_test.go:353: "headlamp-6d8d595f-kjnrd" [00b92ff4-9cbd-44ab-9ed4-3ff789651abd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-kjnrd" [00b92ff4-9cbd-44ab-9ed4-3ff789651abd] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00440116s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable headlamp --alsologtostderr -v=1: (5.83358432s)
--- PASS: TestAddons/parallel/Headlamp (17.94s)

TestAddons/parallel/CloudSpanner (6.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-vdtnx" [e8b4c261-b1f8-40ee-89ed-42ddc725c034] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004109416s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

TestAddons/parallel/LocalPath (52.64s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-888652 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-888652 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-888652 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [9f6da1c0-54e3-4fa0-9d3c-707155a371b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [9f6da1c0-54e3-4fa0-9d3c-707155a371b9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [9f6da1c0-54e3-4fa0-9d3c-707155a371b9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003655339s
addons_test.go:969: (dbg) Run:  kubectl --context addons-888652 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 ssh "cat /opt/local-path-provisioner/pvc-1916529b-54fa-4fad-8cc6-2735d7beac64_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-888652 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-888652 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.077415168s)
--- PASS: TestAddons/parallel/LocalPath (52.64s)
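
Note: the local-path provisioner stores each volume on the node under /opt/local-path-provisioner, in a directory named pvc-<uid>_<namespace>_<claim>, which is why the test can read file1 back over ssh. By hand:

  kubectl --context addons-888652 get pvc test-pvc -o jsonpath='{.metadata.uid}'
  out/minikube-linux-arm64 -p addons-888652 ssh "ls /opt/local-path-provisioner/"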

TestAddons/parallel/NvidiaDevicePlugin (6.11s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-5hv6l" [b5b51b62-dbf4-4360-aa05-77c64cf645f4] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.017162564s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.088180837s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.11s)

TestAddons/parallel/Yakd (11.92s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-c7vvw" [c72fefb8-d419-4177-87bc-74952b17ab0a] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003293156s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-888652 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-888652 addons disable yakd --alsologtostderr -v=1: (5.915333862s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-888652
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-888652: (12.082181221s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-888652
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-888652
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-888652
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

TestCertOptions (30.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-838902 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1227 10:12:28.761156 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-838902 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (27.419201464s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-838902 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-838902 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-838902 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-838902" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-838902
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-838902: (2.084877995s)
--- PASS: TestCertOptions (30.16s)
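
Note: the SANs and port requested at start time end up in the generated apiserver certificate, and the test verifies them straight from the cert, roughly:

  # expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among the SANs
  out/minikube-linux-arm64 -p cert-options-838902 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'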

TestCertExpiration (216.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-435404 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-435404 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (27.50839336s)
E1227 10:07:28.764432 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:08:52.119704 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-435404 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-435404 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.35665972s)
helpers_test.go:176: Cleaning up "cert-expiration-435404" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-435404
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-435404: (2.359154472s)
--- PASS: TestCertExpiration (216.23s)
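
Note: the test first issues certificates with a 3m lifetime, waits past expiry, then restarts with --cert-expiration=8760h to confirm minikube re-issues them. The same knob by hand (profile name hypothetical):

  out/minikube-linux-arm64 start -p cert-demo --cert-expiration=3m --driver=docker --container-runtime=containerd
  # after the certs lapse, restarting with a longer lifetime regenerates them
  out/minikube-linux-arm64 start -p cert-demo --cert-expiration=8760h --driver=docker --container-runtime=containerd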

TestDockerEnvContainerd (42.6s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-241568 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-241568 --driver=docker  --container-runtime=containerd: (26.814748483s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-241568"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-241568": (1.098241274s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-2fwLSStwxVxh/agent.3552849" SSH_AGENT_PID="3552850" DOCKER_HOST=ssh://docker@127.0.0.1:35955 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-2fwLSStwxVxh/agent.3552849" SSH_AGENT_PID="3552850" DOCKER_HOST=ssh://docker@127.0.0.1:35955 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-2fwLSStwxVxh/agent.3552849" SSH_AGENT_PID="3552850" DOCKER_HOST=ssh://docker@127.0.0.1:35955 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.297200915s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-2fwLSStwxVxh/agent.3552849" SSH_AGENT_PID="3552850" DOCKER_HOST=ssh://docker@127.0.0.1:35955 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-241568" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-241568
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-241568: (2.441499552s)
--- PASS: TestDockerEnvContainerd (42.60s)
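
Note: docker-env --ssh-host --ssh-add points the host docker CLI at the daemon inside the minikube node over SSH instead of a TCP socket. The usual wiring is to eval its output, roughly:

  # emits SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST exports and loads the node's key
  eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-241568)"
  docker version    # now talks to the engine inside the node
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env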

TestErrorSpam/setup (25.83s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-383828 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-383828 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-383828 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-383828 --driver=docker  --container-runtime=containerd: (25.82960445s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (25.83s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.22s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 status
--- PASS: TestErrorSpam/status (1.22s)

TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 pause
--- PASS: TestErrorSpam/pause (1.63s)

TestErrorSpam/unpause (2.06s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 unpause
--- PASS: TestErrorSpam/unpause (2.06s)

TestErrorSpam/stop (1.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 stop: (1.438250212s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-383828 --log_dir /tmp/nospam-383828 stop
--- PASS: TestErrorSpam/stop (1.65s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22343-3531265/.minikube/files/etc/test/nested/copy/3533147/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-237950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1227 09:32:28.764940 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:28.770858 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:28.781300 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:28.801633 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:28.841950 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:28.922291 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:29.082756 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:29.403394 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:30.044468 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:31.324711 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:33.885556 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-237950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (45.450033771s)
--- PASS: TestFunctional/serial/StartWithProxy (45.45s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.15s)

=== RUN   TestFunctional/serial/SoftStart
I1227 09:32:37.462553 3533147 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-237950 --alsologtostderr -v=8
E1227 09:32:39.006268 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-237950 --alsologtostderr -v=8: (7.149900188s)
functional_test.go:678: soft start took 7.154086153s for "functional-237950" cluster.
I1227 09:32:44.613187 3533147 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (7.15s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-237950 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 cache add registry.k8s.io/pause:3.1: (1.347888424s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 cache add registry.k8s.io/pause:3.3: (1.146055886s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 cache add registry.k8s.io/pause:latest: (1.081988621s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-237950 /tmp/TestFunctionalserialCacheCmdcacheadd_local3688092765/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cache add minikube-local-cache-test:functional-237950
E1227 09:32:49.247005 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cache delete minikube-local-cache-test:functional-237950
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-237950
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)
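
Taken together with the add_remote and add_local runs above, the cache check is easy to replay by hand. A minimal sketch, assuming the functional-237950 profile is running; the commands are taken verbatim from this log, with only the grep filter added for illustration:

	# cache an image on the host, then confirm containerd inside the node can see it
	out/minikube-linux-arm64 -p functional-237950 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl images | grep pause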

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.018285ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.86s)
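
The reload sequence above can be replayed manually. A minimal sketch, assuming the functional-237950 profile is running and pause:latest is already in the host-side cache (all commands appear verbatim in the log):

	# remove the image inside the node; inspecti then fails with "no such image" (exit 1)
	out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# cache reload pushes the host-side cache back into the node, after which inspecti succeeds
	out/minikube-linux-arm64 -p functional-237950 cache reload
	out/minikube-linux-arm64 -p functional-237950 ssh sudo crictl inspecti registry.k8s.io/pause:latest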

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 kubectl -- --context functional-237950 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-237950 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-237950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1227 09:33:09.727286 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-237950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.949656001s)
functional_test.go:776: restart took 46.9497503s for "functional-237950" cluster.
I1227 09:33:39.266110 3533147 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (46.96s)
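
The restart above injects a component flag through --extra-config, using the component.key=value form. A minimal sketch of the same invocation, verbatim from the log:

	# restart the existing cluster with an extra apiserver flag; --wait=all blocks until all components report healthy
	out/minikube-linux-arm64 start -p functional-237950 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all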

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-237950 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.5s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 logs: (1.497176271s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.57s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 logs --file /tmp/TestFunctionalserialLogsFileCmd3839548091/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 logs --file /tmp/TestFunctionalserialLogsFileCmd3839548091/001/logs.txt: (1.569236176s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                    
TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-237950 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-237950
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-237950: exit status 115 (782.655648ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32665 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-237950 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.67s)
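
The failure mode exercised here is reproducible by hand. A minimal sketch, assuming testdata/invalidsvc.yaml defines a Service with no running backing pod (its contents are not shown in this log):

	kubectl --context functional-237950 apply -f testdata/invalidsvc.yaml
	# with no running pod behind the service, minikube exits 115 (SVC_UNREACHABLE)
	out/minikube-linux-arm64 service invalid-svc -p functional-237950; echo "exit=$?"
	kubectl --context functional-237950 delete -f testdata/invalidsvc.yaml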

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 config get cpus: exit status 14 (81.380457ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 config get cpus: exit status 14 (62.106291ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
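
The exit codes above follow a simple contract: config get on an unset key fails with exit status 14, while set, get on a present key, and unset all succeed. A minimal sketch:

	out/minikube-linux-arm64 -p functional-237950 config get cpus; echo "exit=$?"   # 14 while unset
	out/minikube-linux-arm64 -p functional-237950 config set cpus 2
	out/minikube-linux-arm64 -p functional-237950 config get cpus                   # prints 2
	out/minikube-linux-arm64 -p functional-237950 config unset cpus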

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.97s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-237950 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-237950 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 3569326: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.97s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-237950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-237950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (188.314483ms)

                                                
                                                
-- stdout --
	* [functional-237950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:34:23.344391 3568854 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:34:23.344546 3568854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:34:23.344559 3568854 out.go:374] Setting ErrFile to fd 2...
	I1227 09:34:23.344564 3568854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:34:23.345354 3568854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:34:23.346031 3568854 out.go:368] Setting JSON to false
	I1227 09:34:23.347106 3568854 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":55016,"bootTime":1766773048,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 09:34:23.347299 3568854 start.go:143] virtualization:  
	I1227 09:34:23.350679 3568854 out.go:179] * [functional-237950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:34:23.353250 3568854 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:34:23.353329 3568854 notify.go:221] Checking for updates...
	I1227 09:34:23.356979 3568854 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:34:23.360037 3568854 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 09:34:23.362925 3568854 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 09:34:23.366656 3568854 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:34:23.369434 3568854 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:34:23.372909 3568854 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:34:23.373649 3568854 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:34:23.402079 3568854 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:34:23.402193 3568854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:34:23.459372 3568854 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:34:23.449503613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:34:23.459534 3568854 docker.go:319] overlay module found
	I1227 09:34:23.462682 3568854 out.go:179] * Using the docker driver based on existing profile
	I1227 09:34:23.465524 3568854 start.go:309] selected driver: docker
	I1227 09:34:23.465546 3568854 start.go:928] validating driver "docker" against &{Name:functional-237950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-237950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:34:23.465658 3568854 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:34:23.469035 3568854 out.go:203] 
	W1227 09:34:23.471844 3568854 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 09:34:23.474780 3568854 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-237950 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)
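
The two dry-run invocations above bracket the memory validator: 250MB is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY, against the 1800MB floor) before any resources are touched, while the second dry-run without an undersized --memory succeeds. A minimal sketch:

	# fails validation up front; nothing is created in dry-run mode
	out/minikube-linux-arm64 start -p functional-237950 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=containerd; echo "exit=$?"
	# the same dry-run without the undersized memory request exits 0
	out/minikube-linux-arm64 start -p functional-237950 --dry-run \
	  --driver=docker --container-runtime=containerd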

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-237950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-237950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (224.058658ms)

                                                
                                                
-- stdout --
	* [functional-237950] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:34:23.798707 3568973 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:34:23.800763 3568973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:34:23.800777 3568973 out.go:374] Setting ErrFile to fd 2...
	I1227 09:34:23.800784 3568973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:34:23.801893 3568973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:34:23.802530 3568973 out.go:368] Setting JSON to false
	I1227 09:34:23.803517 3568973 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":55016,"bootTime":1766773048,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 09:34:23.803598 3568973 start.go:143] virtualization:  
	I1227 09:34:23.806792 3568973 out.go:179] * [functional-237950] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1227 09:34:23.810580 3568973 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:34:23.810780 3568973 notify.go:221] Checking for updates...
	I1227 09:34:23.817088 3568973 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:34:23.819945 3568973 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 09:34:23.822735 3568973 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 09:34:23.825515 3568973 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:34:23.828414 3568973 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:34:23.831648 3568973 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:34:23.832244 3568973 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:34:23.862454 3568973 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:34:23.862569 3568973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:34:23.946330 3568973 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:34:23.931320139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:34:23.946432 3568973 docker.go:319] overlay module found
	I1227 09:34:23.949384 3568973 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 09:34:23.952141 3568973 start.go:309] selected driver: docker
	I1227 09:34:23.952166 3568973 start.go:928] validating driver "docker" against &{Name:functional-237950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-237950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:34:23.952289 3568973 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:34:23.955692 3568973 out.go:203] 
	W1227 09:34:23.958474 3568973 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 09:34:23.961282 3568973 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
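
Besides the default and JSON forms, status accepts a Go template over its status fields via -f (the template in this run spells the Kubelet key as "kublet", copied verbatim from the test source). A minimal sketch:

	out/minikube-linux-arm64 -p functional-237950 status
	out/minikube-linux-arm64 -p functional-237950 status -o json
	# custom Go template; the fields used in this run are {{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}
	out/minikube-linux-arm64 -p functional-237950 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'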

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-237950 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-237950 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-f58t7" [25b79825-de11-4460-b6f5-3579a5da5322] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-f58t7" [25b79825-de11-4460-b6f5-3579a5da5322] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003797175s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30544
functional_test.go:1685: http://192.168.49.2:30544: success! body:
Request served by hello-node-connect-5d95464fd4-f58t7

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30544
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
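
The path exercised above is deployment, NodePort service, URL discovery, then an HTTP round-trip; the "Request served by ..." body in the log is the echo-server reflecting the request back. A minimal sketch (the curl step is added for illustration; everything else is verbatim from the log):

	kubectl --context functional-237950 create deployment hello-node-connect \
	  --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
	kubectl --context functional-237950 expose deployment hello-node-connect --type=NodePort --port=8080
	# prints a node URL such as http://192.168.49.2:30544 once the pod is Running
	URL=$(out/minikube-linux-arm64 -p functional-237950 service hello-node-connect --url)
	curl -s "$URL"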

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (22.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [16ee1da1-8726-429f-8f68-3599860425ce] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004135316s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-237950 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-237950 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-237950 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-237950 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [16c0da2a-e31e-4b04-9d45-df4f28f85c9b] Pending
helpers_test.go:353: "sp-pod" [16c0da2a-e31e-4b04-9d45-df4f28f85c9b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [16c0da2a-e31e-4b04-9d45-df4f28f85c9b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003841173s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-237950 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-237950 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-237950 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [5a2cf258-7222-4ade-a322-f52531b0ca95] Pending
helpers_test.go:353: "sp-pod" [5a2cf258-7222-4ade-a322-f52531b0ca95] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00408941s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-237950 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.05s)
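
The point of the second apply above is persistence: a file written through the PVC by the first sp-pod must still be visible to a freshly created replacement. A minimal sketch, using the storage-provisioner fixtures referenced in the log:

	kubectl --context functional-237950 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-237950 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-237950 exec sp-pod -- touch /tmp/mount/foo
	# delete and recreate the pod; the PVC-backed mount must still contain foo
	kubectl --context functional-237950 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-237950 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-237950 exec sp-pod -- ls /tmp/mount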

                                                
                                    
TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh -n functional-237950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cp functional-237950:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd547932841/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh -n functional-237950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh -n functional-237950 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)
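
Three copy directions are covered: host to node, node back to host, and host to a node path that does not yet exist. A minimal sketch (the /tmp/cp-test.txt host destination is illustrative; the other paths are verbatim from the log):

	# host file into the node
	out/minikube-linux-arm64 -p functional-237950 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node file back out to the host
	out/minikube-linux-arm64 -p functional-237950 cp functional-237950:/home/docker/cp-test.txt /tmp/cp-test.txt
	# the destination directory inside the node is created on demand
	out/minikube-linux-arm64 -p functional-237950 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt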

                                                
                                    
TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/3533147/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /etc/test/nested/copy/3533147/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)
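
What FileSync exercises: files placed under the host's .minikube/files tree are synced into the node at the corresponding absolute path, so the local sync path logged above (3533147 matches the test process id seen in the log prefixes) surfaces in the node as /etc/test/nested/copy/3533147/hosts. A minimal sketch of the node-side probe, verbatim from the log:

	out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /etc/test/nested/copy/3533147/hosts"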

                                                
                                    
TestFunctional/parallel/CertSync (2.19s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/3533147.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /etc/ssl/certs/3533147.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/3533147.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /usr/share/ca-certificates/3533147.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/35331472.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /etc/ssl/certs/35331472.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/35331472.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /usr/share/ca-certificates/35331472.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
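
CertSync probes the same certificate material at three node locations per cert: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named entry such as /etc/ssl/certs/51391683.0. A minimal sketch of one triple, verbatim from the log:

	out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /etc/ssl/certs/3533147.pem"
	out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /usr/share/ca-certificates/3533147.pem"
	out/minikube-linux-arm64 -p functional-237950 ssh "sudo cat /etc/ssl/certs/51391683.0"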

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-237950 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 ssh "sudo systemctl is-active docker": exit status 1 (339.872929ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 ssh "sudo systemctl is-active crio": exit status 1 (360.597692ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
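
Since this cluster runs containerd, the other two runtimes must be inactive. systemctl is-active exits 3 for an inactive unit (the "Process exited with status 3" in stderr above), which minikube ssh surfaces as its own exit status 1. A minimal sketch:

	# each prints "inactive" and exits non-zero on a containerd node
	out/minikube-linux-arm64 -p functional-237950 ssh "sudo systemctl is-active docker"
	out/minikube-linux-arm64 -p functional-237950 ssh "sudo systemctl is-active crio"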

                                                
                                    
TestFunctional/parallel/License (0.38s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.28s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 version -o=json --components: (1.275922071s)
--- PASS: TestFunctional/parallel/Version/components (1.28s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-237950 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-237950
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-237950 image ls --format short --alsologtostderr:
I1227 09:34:33.146643 3570663 out.go:360] Setting OutFile to fd 1 ...
I1227 09:34:33.146886 3570663 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.147610 3570663 out.go:374] Setting ErrFile to fd 2...
I1227 09:34:33.147659 3570663 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.148211 3570663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
I1227 09:34:33.155262 3570663 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.155555 3570663 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.156356 3570663 cli_runner.go:164] Run: docker container inspect functional-237950 --format={{.State.Status}}
I1227 09:34:33.182778 3570663 ssh_runner.go:195] Run: systemctl --version
I1227 09:34:33.182840 3570663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237950
I1227 09:34:33.206309 3570663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35965 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/functional-237950/id_rsa Username:docker}
I1227 09:34:33.311556 3570663 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-237950 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ sha256:88898f │ 20.7MB │
│ docker.io/library/minikube-local-cache-test       │ functional-237950                     │ sha256:325cc4 │ 993B   │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ sha256:962dbb │ 23MB   │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/pause                             │ 3.3                                   │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                             │ latest                                │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ sha256:c3fcf2 │ 24.7MB │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ sha256:ddc842 │ 15.4MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-237950                     │ sha256:ce2d2c │ 2.17MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ sha256:de369f │ 22.4MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-237950 image ls --format table --alsologtostderr:
I1227 09:34:33.814720 3570853 out.go:360] Setting OutFile to fd 1 ...
I1227 09:34:33.815673 3570853 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.815774 3570853 out.go:374] Setting ErrFile to fd 2...
I1227 09:34:33.815817 3570853 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.816324 3570853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
I1227 09:34:33.817203 3570853 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.817376 3570853 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.818595 3570853 cli_runner.go:164] Run: docker container inspect functional-237950 --format={{.State.Status}}
I1227 09:34:33.857547 3570853 ssh_runner.go:195] Run: systemctl --version
I1227 09:34:33.857606 3570853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237950
I1227 09:34:33.885097 3570853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35965 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/functional-237950/id_rsa Username:docker}
I1227 09:34:34.002765 3570853 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-237950 image ls --format json --alsologtostderr:
[{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"20672243"},{"id":"sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"22432091"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"15405198"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"24692295"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:325cc491e52655f7435b4f0781175fde47ee0cf6ea946b8b561b308aa1a93ca7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-237950"],"size":"993"},{"id":"sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22987510"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"21749640"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-237950 image ls --format json --alsologtostderr:
I1227 09:34:33.520110 3570773 out.go:360] Setting OutFile to fd 1 ...
I1227 09:34:33.520242 3570773 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.520256 3570773 out.go:374] Setting ErrFile to fd 2...
I1227 09:34:33.520262 3570773 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.520856 3570773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
I1227 09:34:33.521908 3570773 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.522062 3570773 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.522829 3570773 cli_runner.go:164] Run: docker container inspect functional-237950 --format={{.State.Status}}
I1227 09:34:33.548506 3570773 ssh_runner.go:195] Run: systemctl --version
I1227 09:34:33.548611 3570773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237950
I1227 09:34:33.570075 3570773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35965 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/functional-237950/id_rsa Username:docker}
I1227 09:34:33.674419 3570773 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-237950 image ls --format yaml --alsologtostderr:
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "2173567"
- id: sha256:962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22987510"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "24692295"
- id: sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "20672243"
- id: sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "15405198"
- id: sha256:325cc491e52655f7435b4f0781175fde47ee0cf6ea946b8b561b308aa1a93ca7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-237950
size: "993"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "22432091"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-237950 image ls --format yaml --alsologtostderr:
I1227 09:34:33.247533 3570695 out.go:360] Setting OutFile to fd 1 ...
I1227 09:34:33.247738 3570695 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.247796 3570695 out.go:374] Setting ErrFile to fd 2...
I1227 09:34:33.247816 3570695 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.248124 3570695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
I1227 09:34:33.248799 3570695 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.248980 3570695 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.249567 3570695 cli_runner.go:164] Run: docker container inspect functional-237950 --format={{.State.Status}}
I1227 09:34:33.271918 3570695 ssh_runner.go:195] Run: systemctl --version
I1227 09:34:33.271987 3570695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237950
I1227 09:34:33.289778 3570695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35965 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/functional-237950/id_rsa Username:docker}
I1227 09:34:33.394083 3570695 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

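For reference, the four ImageList variants above run the same listing and differ only in the printer flag; a quick sketch against this profile:

    minikube -p functional-237950 image ls --format short   # one image reference per line
    minikube -p functional-237950 image ls --format table   # table with IMAGE ID and SIZE columns
    minikube -p functional-237950 image ls --format json    # single JSON array (machine-readable)
    minikube -p functional-237950 image ls --format yaml    # same fields, YAML layout
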
TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 ssh pgrep buildkitd: exit status 1 (382.189566ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image build -t localhost/my-image:functional-237950 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 image build -t localhost/my-image:functional-237950 testdata/build --alsologtostderr: (3.364728219s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-237950 image build -t localhost/my-image:functional-237950 testdata/build --alsologtostderr:
I1227 09:34:33.784686 3570858 out.go:360] Setting OutFile to fd 1 ...
I1227 09:34:33.786109 3570858 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.786147 3570858 out.go:374] Setting ErrFile to fd 2...
I1227 09:34:33.786171 3570858 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:34:33.786480 3570858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
I1227 09:34:33.787316 3570858 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.788647 3570858 config.go:182] Loaded profile config "functional-237950": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I1227 09:34:33.789220 3570858 cli_runner.go:164] Run: docker container inspect functional-237950 --format={{.State.Status}}
I1227 09:34:33.818035 3570858 ssh_runner.go:195] Run: systemctl --version
I1227 09:34:33.818083 3570858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237950
I1227 09:34:33.841935 3570858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35965 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/functional-237950/id_rsa Username:docker}
I1227 09:34:33.942055 3570858 build_images.go:162] Building image from path: /tmp/build.1732667287.tar
I1227 09:34:33.942135 3570858 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 09:34:33.953381 3570858 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1732667287.tar
I1227 09:34:33.957397 3570858 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1732667287.tar: stat -c "%s %y" /var/lib/minikube/build/build.1732667287.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1732667287.tar': No such file or directory
I1227 09:34:33.957429 3570858 ssh_runner.go:362] scp /tmp/build.1732667287.tar --> /var/lib/minikube/build/build.1732667287.tar (3072 bytes)
I1227 09:34:33.978699 3570858 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1732667287
I1227 09:34:33.987736 3570858 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1732667287 -xf /var/lib/minikube/build/build.1732667287.tar
I1227 09:34:33.997063 3570858 containerd.go:402] Building image: /var/lib/minikube/build/build.1732667287
I1227 09:34:33.997144 3570858 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1732667287 --local dockerfile=/var/lib/minikube/build/build.1732667287 --output type=image,name=localhost/my-image:functional-237950
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:414285351e830ad3cdd3140c6744cb740fc6cbf54273487a0d1d7b4964bf01bd
#8 exporting manifest sha256:414285351e830ad3cdd3140c6744cb740fc6cbf54273487a0d1d7b4964bf01bd 0.0s done
#8 exporting config sha256:c16cdc9a3b224496c2c3ad4be68a9e11bd3e90273df596343d14aa8cf6793156 0.0s done
#8 naming to localhost/my-image:functional-237950 done
#8 DONE 0.2s
I1227 09:34:37.068970 3570858 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1732667287 --local dockerfile=/var/lib/minikube/build/build.1732667287 --output type=image,name=localhost/my-image:functional-237950: (3.071794762s)
I1227 09:34:37.069060 3570858 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1732667287
I1227 09:34:37.078789 3570858 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1732667287.tar
I1227 09:34:37.087368 3570858 build_images.go:218] Built localhost/my-image:functional-237950 from /tmp/build.1732667287.tar
I1227 09:34:37.087405 3570858 build_images.go:134] succeeded building to: functional-237950
I1227 09:34:37.087412 3570858 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

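As the stderr above shows, image build on the containerd runtime tars the local build context, copies it under /var/lib/minikube/build on the node, and then drives BuildKit directly with buildctl. A hedged reconstruction of that flow, using the temporary paths from this run:

    # host side: what the test invoked
    minikube -p functional-237950 image build -t localhost/my-image:functional-237950 testdata/build
    # node side: the buildctl invocation minikube issued, per the log above
    sudo buildctl build --frontend dockerfile.v0 \
      --local context=/var/lib/minikube/build/build.1732667287 \
      --local dockerfile=/var/lib/minikube/build/build.1732667287 \
      --output type=image,name=localhost/my-image:functional-237950
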
TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 update-context --alsologtostderr -v=2
2025/12/27 09:34:32 [DEBUG] GET http://127.0.0.1:36987/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950 --alsologtostderr: (1.187950328s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls
E1227 09:33:50.688035 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-237950 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950 --alsologtostderr: (1.251183981s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.49s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-237950 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-237950 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-bzgj6" [e810e5c8-6516-4e9f-84e0-b4896091ddec] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-bzgj6" [e810e5c8-6516-4e9f-84e0-b4896091ddec] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003891845s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

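DeployApp is plain kubectl against the minikube context: create a deployment, expose it as a NodePort, then wait for the pod to go Running. The same steps by hand:

    kubectl --context functional-237950 create deployment hello-node \
      --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-237950 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-237950 get pods -l app=hello-node   # wait until Running
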
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

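ImageSaveToFile and ImageLoadFromFile form a tar-based round trip for moving an image out of and back into the cluster runtime, with ImageRemove proving the reload actually restores the tag. A sketch of the cycle (the tar path here is arbitrary):

    IMG=ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950
    minikube -p functional-237950 image save "$IMG" /tmp/echo-server-save.tar
    minikube -p functional-237950 image rm "$IMG"
    minikube -p functional-237950 image load /tmp/echo-server-save.tar
    minikube -p functional-237950 image ls   # the tag should be listed again
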
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-237950 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-237950 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-237950 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 3566483: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-237950 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-237950 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-237950 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [84f33609-6a4d-49f9-8a33-46de6886bb6e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [84f33609-6a4d-49f9-8a33-46de6886bb6e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003939326s
I1227 09:34:06.022385 3533147 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 service list -o json
functional_test.go:1509: Took "373.712209ms" to run "out/minikube-linux-arm64 -p functional-237950 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30585
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.60s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30585
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

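ServiceCmd/HTTPS, Format, and URL all resolve the same NodePort mapping; --url prints the endpoint instead of opening a browser. Verifying by hand (endpoint value taken from this run):

    URL=$(minikube -p functional-237950 service hello-node --url)
    echo "$URL"      # http://192.168.49.2:30585 in this run
    curl -s "$URL"   # echo-server should echo the request back
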
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-237950 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.245.161 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-237950 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

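Taken together, the tunnel serial tests cover the full LoadBalancer workflow: start minikube tunnel, wait for the Service to get an ingress IP, hit that IP directly, then tear the tunnel down. By hand (IP value from this run):

    minikube -p functional-237950 tunnel &   # must stay running in the background
    kubectl --context functional-237950 get svc nginx-svc \
      -o jsonpath={.status.loadBalancer.ingress[0].ip}   # 10.110.245.161 in this run
    curl -s http://10.110.245.161   # nginx answers while the tunnel is up
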
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "424.597271ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "96.680808ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "399.366748ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "81.56438ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

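The ProfileCmd timings above compare the full and light listings; the light variants skip querying live cluster status, which is why they come back in under 100ms here. The four invocations exercised:

    minikube profile list                    # table, includes per-profile status
    minikube profile list -l                 # light table, config only
    minikube profile list -o json            # full JSON
    minikube profile list -o json --light    # light JSON
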
TestFunctional/parallel/MountCmd/any-port (8.58s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdany-port43926573/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766828057633661585" to /tmp/TestFunctionalparallelMountCmdany-port43926573/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766828057633661585" to /tmp/TestFunctionalparallelMountCmdany-port43926573/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766828057633661585" to /tmp/TestFunctionalparallelMountCmdany-port43926573/001/test-1766828057633661585
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (511.954876ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 09:34:18.147302 3533147 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 09:34 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 09:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 09:34 test-1766828057633661585
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh cat /mount-9p/test-1766828057633661585
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-237950 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [98814b30-03b2-4ca9-a3f4-e76c6e71fd7a] Pending
helpers_test.go:353: "busybox-mount" [98814b30-03b2-4ca9-a3f4-e76c6e71fd7a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [98814b30-03b2-4ca9-a3f4-e76c6e71fd7a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [98814b30-03b2-4ca9-a3f4-e76c6e71fd7a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.0040008s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-237950 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdany-port43926573/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.58s)

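MountCmd/any-port drives the 9p mount end to end: start the mount daemon, confirm with findmnt (retrying once while the server comes up), exercise the files from both host and pod, then unmount; the specific-port variant below does the same with a pinned --port. The core flow by hand (host path arbitrary):

    minikube mount -p functional-237950 /tmp/mount-demo:/mount-9p &   # keep running
    minikube -p functional-237950 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-237950 ssh "ls -la /mount-9p"
    minikube -p functional-237950 ssh "sudo umount -f /mount-9p"
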
TestFunctional/parallel/MountCmd/specific-port (2.27s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdspecific-port2063894078/001:/mount-9p --alsologtostderr -v=1 --port 45801]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (420.86085ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 09:34:26.637079 3533147 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdspecific-port2063894078/001:/mount-9p --alsologtostderr -v=1 --port 45801] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 ssh "sudo umount -f /mount-9p": exit status 1 (328.139238ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-237950 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdspecific-port2063894078/001:/mount-9p --alsologtostderr -v=1 --port 45801] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.27s)
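
[Editor's note] The first findmnt probe above fails because the 9p mount on --port 45801 is not yet established; the harness retries after 600ms (retry.go:84) and succeeds. Roughly equivalent polling in shell, assuming the same profile (the loop bound is illustrative):

    # poll until the 9p mount shows up in the guest, mirroring the 600ms backoff
    for i in $(seq 1 10); do
      out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T /mount-9p | grep 9p" && break
      sleep 0.6
    done

The later "umount -f" failure (exit 32, "not mounted") appears to be tolerated for the same reason in reverse: the mount daemon had already been stopped, so the test merely logs the non-zero exit and passes.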

TestFunctional/parallel/MountCmd/VerifyCleanup (2.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2408569094/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2408569094/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2408569094/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T" /mount1: exit status 1 (934.482655ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-237950 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-237950 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2408569094/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2408569094/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-237950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2408569094/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.85s)
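
[Editor's note] VerifyCleanup starts three mount daemons against the same host directory and then checks that a single "mount --kill=true" tears all of them down, which is why the per-daemon stop steps afterwards find no surviving parent process. A condensed sketch, assuming the same profile (the /tmp/mnt path is illustrative):

    out/minikube-linux-arm64 mount -p functional-237950 /tmp/mnt:/mount1 &
    out/minikube-linux-arm64 mount -p functional-237950 /tmp/mnt:/mount2 &
    out/minikube-linux-arm64 mount -p functional-237950 /tmp/mnt:/mount3 &
    # one kill switch terminates every mount daemon belonging to the profile
    out/minikube-linux-arm64 mount -p functional-237950 --kill=true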

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-237950
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-237950
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-237950
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (179.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1227 09:35:12.608649 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:37:28.761775 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m58.485445142s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (179.35s)

TestMultiControlPlane/serial/DeployApp (7.28s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 kubectl -- rollout status deployment/busybox: (4.137172421s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-64p8d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-fh8p9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-kvvrc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-64p8d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-fh8p9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-kvvrc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-64p8d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-fh8p9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-kvvrc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.28s)
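
[Editor's note] DeployApp resolves kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from every busybox replica, confirming that DNS works from all pods of the deployment spread across the HA cluster. A condensed equivalent, listing pod names with the same jsonpath query the harness uses (no label selector is assumed; in this test the default namespace contains only the busybox pods):

    for pod in $(kubectl --context ha-299220 get pods -o jsonpath='{.items[*].metadata.name}'); do
      for host in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        kubectl --context ha-299220 exec "$pod" -- nslookup "$host"
      done
    done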

TestMultiControlPlane/serial/PingHostFromPods (1.63s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-64p8d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-64p8d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-fh8p9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-fh8p9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-kvvrc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 kubectl -- exec busybox-769dd8b7dd-kvvrc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)

TestMultiControlPlane/serial/AddWorkerNode (30.15s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 node add --alsologtostderr -v 5
E1227 09:37:56.448960 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 node add --alsologtostderr -v 5: (29.069753346s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5: (1.075717014s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.15s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-299220 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.102814764s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (20.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 status --output json --alsologtostderr -v 5: (1.074948349s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp testdata/cp-test.txt ha-299220:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3604643462/001/cp-test_ha-299220.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220:/home/docker/cp-test.txt ha-299220-m02:/home/docker/cp-test_ha-299220_ha-299220-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m02 "sudo cat /home/docker/cp-test_ha-299220_ha-299220-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220:/home/docker/cp-test.txt ha-299220-m03:/home/docker/cp-test_ha-299220_ha-299220-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test_ha-299220_ha-299220-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220:/home/docker/cp-test.txt ha-299220-m04:/home/docker/cp-test_ha-299220_ha-299220-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m04 "sudo cat /home/docker/cp-test_ha-299220_ha-299220-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp testdata/cp-test.txt ha-299220-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3604643462/001/cp-test_ha-299220-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m02:/home/docker/cp-test.txt ha-299220:/home/docker/cp-test_ha-299220-m02_ha-299220.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220 "sudo cat /home/docker/cp-test_ha-299220-m02_ha-299220.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m02:/home/docker/cp-test.txt ha-299220-m03:/home/docker/cp-test_ha-299220-m02_ha-299220-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test_ha-299220-m02_ha-299220-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m02:/home/docker/cp-test.txt ha-299220-m04:/home/docker/cp-test_ha-299220-m02_ha-299220-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m04 "sudo cat /home/docker/cp-test_ha-299220-m02_ha-299220-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp testdata/cp-test.txt ha-299220-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3604643462/001/cp-test_ha-299220-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m03:/home/docker/cp-test.txt ha-299220:/home/docker/cp-test_ha-299220-m03_ha-299220.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220 "sudo cat /home/docker/cp-test_ha-299220-m03_ha-299220.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m03:/home/docker/cp-test.txt ha-299220-m02:/home/docker/cp-test_ha-299220-m03_ha-299220-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m02 "sudo cat /home/docker/cp-test_ha-299220-m03_ha-299220-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m03:/home/docker/cp-test.txt ha-299220-m04:/home/docker/cp-test_ha-299220-m03_ha-299220-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m04 "sudo cat /home/docker/cp-test_ha-299220-m03_ha-299220-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp testdata/cp-test.txt ha-299220-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3604643462/001/cp-test_ha-299220-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m04:/home/docker/cp-test.txt ha-299220:/home/docker/cp-test_ha-299220-m04_ha-299220.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220 "sudo cat /home/docker/cp-test_ha-299220-m04_ha-299220.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m04:/home/docker/cp-test.txt ha-299220-m02:/home/docker/cp-test_ha-299220-m04_ha-299220-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m02 "sudo cat /home/docker/cp-test_ha-299220-m04_ha-299220-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m04:/home/docker/cp-test.txt ha-299220-m03:/home/docker/cp-test_ha-299220-m04_ha-299220-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test_ha-299220-m04_ha-299220-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.03s)
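
[Editor's note] CopyFile exercises every direction of "minikube cp" across the four nodes: host to node, node back to host, and node to node, verifying each copy with "ssh -n <node> sudo cat". One cell of that matrix as a sketch, assuming the same cluster (the /tmp destination path is illustrative):

    out/minikube-linux-arm64 -p ha-299220 cp testdata/cp-test.txt ha-299220-m02:/home/docker/cp-test.txt                 # host -> node
    out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m02:/home/docker/cp-test.txt /tmp/cp-test-back.txt                # node -> host
    out/minikube-linux-arm64 -p ha-299220 cp ha-299220-m02:/home/docker/cp-test.txt ha-299220-m03:/home/docker/cp-test.txt  # node -> node
    out/minikube-linux-arm64 -p ha-299220 ssh -n ha-299220-m03 "sudo cat /home/docker/cp-test.txt"                       # verify contents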

TestMultiControlPlane/serial/StopSecondaryNode (13.01s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 node stop m02 --alsologtostderr -v 5: (12.208935597s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
E1227 09:38:52.119172 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:52.124522 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:52.135153 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:52.155426 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:52.195701 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:52.276123 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:52.436477 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:52.757161 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5: exit status 7 (797.108591ms)
-- stdout --
	ha-299220
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-299220-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-299220-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-299220-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1227 09:38:52.046653 3587310 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:38:52.046765 3587310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:38:52.046771 3587310 out.go:374] Setting ErrFile to fd 2...
	I1227 09:38:52.046777 3587310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:38:52.047432 3587310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:38:52.047704 3587310 out.go:368] Setting JSON to false
	I1227 09:38:52.047727 3587310 mustload.go:66] Loading cluster: ha-299220
	I1227 09:38:52.048418 3587310 config.go:182] Loaded profile config "ha-299220": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:38:52.048430 3587310 status.go:174] checking status of ha-299220 ...
	I1227 09:38:52.049169 3587310 cli_runner.go:164] Run: docker container inspect ha-299220 --format={{.State.Status}}
	I1227 09:38:52.050798 3587310 notify.go:221] Checking for updates...
	I1227 09:38:52.073135 3587310 status.go:371] ha-299220 host status = "Running" (err=<nil>)
	I1227 09:38:52.073161 3587310 host.go:66] Checking if "ha-299220" exists ...
	I1227 09:38:52.073485 3587310 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-299220
	I1227 09:38:52.102404 3587310 host.go:66] Checking if "ha-299220" exists ...
	I1227 09:38:52.102767 3587310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:38:52.102822 3587310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-299220
	I1227 09:38:52.123126 3587310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35970 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/ha-299220/id_rsa Username:docker}
	I1227 09:38:52.224415 3587310 ssh_runner.go:195] Run: systemctl --version
	I1227 09:38:52.231060 3587310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:38:52.244856 3587310 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:38:52.330104 3587310 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-27 09:38:52.316829217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:38:52.330796 3587310 kubeconfig.go:125] found "ha-299220" server: "https://192.168.49.254:8443"
	I1227 09:38:52.330861 3587310 api_server.go:166] Checking apiserver status ...
	I1227 09:38:52.330909 3587310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:38:52.344056 3587310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1406/cgroup
	I1227 09:38:52.352416 3587310 api_server.go:192] apiserver freezer: "5:freezer:/docker/3e18e590224356548d6d8981da65c9eabf04ab4b38c7b00625eef26e9f3355e3/kubepods/burstable/pod1c6fb01faf4d50b10aabc70a5b909dc6/189a208612758f3e3fe84a8b77a18033337dc6a68b5cfcde9b88f1f6c13c05cf"
	I1227 09:38:52.352494 3587310 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e18e590224356548d6d8981da65c9eabf04ab4b38c7b00625eef26e9f3355e3/kubepods/burstable/pod1c6fb01faf4d50b10aabc70a5b909dc6/189a208612758f3e3fe84a8b77a18033337dc6a68b5cfcde9b88f1f6c13c05cf/freezer.state
	I1227 09:38:52.360556 3587310 api_server.go:214] freezer state: "THAWED"
	I1227 09:38:52.360588 3587310 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:38:52.368788 3587310 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:38:52.368821 3587310 status.go:463] ha-299220 apiserver status = Running (err=<nil>)
	I1227 09:38:52.368835 3587310 status.go:176] ha-299220 status: &{Name:ha-299220 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:38:52.368852 3587310 status.go:174] checking status of ha-299220-m02 ...
	I1227 09:38:52.369233 3587310 cli_runner.go:164] Run: docker container inspect ha-299220-m02 --format={{.State.Status}}
	I1227 09:38:52.387918 3587310 status.go:371] ha-299220-m02 host status = "Stopped" (err=<nil>)
	I1227 09:38:52.387944 3587310 status.go:384] host is not running, skipping remaining checks
	I1227 09:38:52.387951 3587310 status.go:176] ha-299220-m02 status: &{Name:ha-299220-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:38:52.387978 3587310 status.go:174] checking status of ha-299220-m03 ...
	I1227 09:38:52.388322 3587310 cli_runner.go:164] Run: docker container inspect ha-299220-m03 --format={{.State.Status}}
	I1227 09:38:52.416588 3587310 status.go:371] ha-299220-m03 host status = "Running" (err=<nil>)
	I1227 09:38:52.416616 3587310 host.go:66] Checking if "ha-299220-m03" exists ...
	I1227 09:38:52.416979 3587310 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-299220-m03
	I1227 09:38:52.436353 3587310 host.go:66] Checking if "ha-299220-m03" exists ...
	I1227 09:38:52.436696 3587310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:38:52.436744 3587310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-299220-m03
	I1227 09:38:52.454803 3587310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35980 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/ha-299220-m03/id_rsa Username:docker}
	I1227 09:38:52.554531 3587310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:38:52.569233 3587310 kubeconfig.go:125] found "ha-299220" server: "https://192.168.49.254:8443"
	I1227 09:38:52.569261 3587310 api_server.go:166] Checking apiserver status ...
	I1227 09:38:52.569308 3587310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:38:52.581974 3587310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1262/cgroup
	I1227 09:38:52.590415 3587310 api_server.go:192] apiserver freezer: "5:freezer:/docker/a36e889406cbe56b05abde6a6c561a8ea8d19027ab9ef26dca4cc80fe9d95655/kubepods/burstable/pod3ffc97a8a9cfffe05522c5d03d376487/798ea0816a45b4b00aa711733809233f65d2ab657184cd5a5fa2360fff0e9c21"
	I1227 09:38:52.590514 3587310 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a36e889406cbe56b05abde6a6c561a8ea8d19027ab9ef26dca4cc80fe9d95655/kubepods/burstable/pod3ffc97a8a9cfffe05522c5d03d376487/798ea0816a45b4b00aa711733809233f65d2ab657184cd5a5fa2360fff0e9c21/freezer.state
	I1227 09:38:52.598402 3587310 api_server.go:214] freezer state: "THAWED"
	I1227 09:38:52.598431 3587310 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:38:52.606603 3587310 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:38:52.606634 3587310 status.go:463] ha-299220-m03 apiserver status = Running (err=<nil>)
	I1227 09:38:52.606644 3587310 status.go:176] ha-299220-m03 status: &{Name:ha-299220-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:38:52.606660 3587310 status.go:174] checking status of ha-299220-m04 ...
	I1227 09:38:52.607012 3587310 cli_runner.go:164] Run: docker container inspect ha-299220-m04 --format={{.State.Status}}
	I1227 09:38:52.626402 3587310 status.go:371] ha-299220-m04 host status = "Running" (err=<nil>)
	I1227 09:38:52.626431 3587310 host.go:66] Checking if "ha-299220-m04" exists ...
	I1227 09:38:52.626768 3587310 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-299220-m04
	I1227 09:38:52.646253 3587310 host.go:66] Checking if "ha-299220-m04" exists ...
	I1227 09:38:52.646672 3587310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:38:52.646742 3587310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-299220-m04
	I1227 09:38:52.665787 3587310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35985 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/ha-299220-m04/id_rsa Username:docker}
	I1227 09:38:52.769066 3587310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:38:52.783752 3587310 status.go:176] ha-299220-m04 status: &{Name:ha-299220-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.01s)
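
[Editor's note] The non-zero exit above is the point of the test: with m02 stopped, "status" reports that node as Stopped and exits 7 instead of 0, and the harness accepts this as the expected degraded state. The exit-code mapping is inferred from this log rather than from documentation, so treat it as an observation:

    out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
    if [ "$?" -eq 7 ]; then
      echo "one or more nodes are stopped"   # observed after 'node stop m02'
    fi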

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1227 09:38:53.397896 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 node start m02 --alsologtostderr -v 5
E1227 09:38:54.678410 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:38:57.239198 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:39:02.360223 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 node start m02 --alsologtostderr -v 5: (12.208251526s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5: (1.283478685s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.157212788s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.16s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.86s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 stop --alsologtostderr -v 5
E1227 09:39:12.601246 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:39:33.081472 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 stop --alsologtostderr -v 5: (37.701538995s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 start --wait true --alsologtostderr -v 5
E1227 09:40:14.042515 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 start --wait true --alsologtostderr -v 5: (1m0.998505309s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.86s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.73s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 node delete m03 --alsologtostderr -v 5: (9.651686383s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.73s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (36.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 stop --alsologtostderr -v 5: (36.44972384s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5: exit status 7 (114.972253ms)
-- stdout --
	ha-299220
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-299220-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-299220-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1227 09:41:35.284199 3602102 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:41:35.284342 3602102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:41:35.284356 3602102 out.go:374] Setting ErrFile to fd 2...
	I1227 09:41:35.284363 3602102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:41:35.284627 3602102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:41:35.284855 3602102 out.go:368] Setting JSON to false
	I1227 09:41:35.284899 3602102 mustload.go:66] Loading cluster: ha-299220
	I1227 09:41:35.285007 3602102 notify.go:221] Checking for updates...
	I1227 09:41:35.285379 3602102 config.go:182] Loaded profile config "ha-299220": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:41:35.285401 3602102 status.go:174] checking status of ha-299220 ...
	I1227 09:41:35.286312 3602102 cli_runner.go:164] Run: docker container inspect ha-299220 --format={{.State.Status}}
	I1227 09:41:35.304472 3602102 status.go:371] ha-299220 host status = "Stopped" (err=<nil>)
	I1227 09:41:35.304498 3602102 status.go:384] host is not running, skipping remaining checks
	I1227 09:41:35.304506 3602102 status.go:176] ha-299220 status: &{Name:ha-299220 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:41:35.304550 3602102 status.go:174] checking status of ha-299220-m02 ...
	I1227 09:41:35.304851 3602102 cli_runner.go:164] Run: docker container inspect ha-299220-m02 --format={{.State.Status}}
	I1227 09:41:35.330271 3602102 status.go:371] ha-299220-m02 host status = "Stopped" (err=<nil>)
	I1227 09:41:35.330297 3602102 status.go:384] host is not running, skipping remaining checks
	I1227 09:41:35.330305 3602102 status.go:176] ha-299220-m02 status: &{Name:ha-299220-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:41:35.330324 3602102 status.go:174] checking status of ha-299220-m04 ...
	I1227 09:41:35.330629 3602102 cli_runner.go:164] Run: docker container inspect ha-299220-m04 --format={{.State.Status}}
	I1227 09:41:35.349180 3602102 status.go:371] ha-299220-m04 host status = "Stopped" (err=<nil>)
	I1227 09:41:35.349202 3602102 status.go:384] host is not running, skipping remaining checks
	I1227 09:41:35.349208 3602102 status.go:176] ha-299220-m04 status: &{Name:ha-299220-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.56s)

TestMultiControlPlane/serial/RestartCluster (60.05s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1227 09:41:35.963525 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:42:28.761696 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.068727389s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.05s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (46.09s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 node add --control-plane --alsologtostderr -v 5: (44.994779876s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-299220 status --alsologtostderr -v 5: (1.099239836s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.09s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.163476792s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.16s)

TestJSONOutput/start/Command (47.04s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-064247 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1227 09:43:52.119630 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-064247 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (47.038583073s)
--- PASS: TestJSONOutput/start/Command (47.04s)
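
[Editor's note] With --output=json, minikube emits one CloudEvents-style JSON object per line (see the specversion/type/data fields in the TestErrorJSONOutput output further below); the DistinctCurrentSteps and IncreasingCurrentSteps subtests then validate the step counters carried in those events. A sketch of consuming that stream, assuming jq is available on the host (jq is not part of the test):

    out/minikube-linux-arm64 start -p json-output-064247 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + ": " + .data.message'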

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-064247 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-064247 --output=json --user=testUser
E1227 09:44:19.806002 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-064247 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-064247 --output=json --user=testUser: (6.007479243s)
--- PASS: TestJSONOutput/stop/Command (6.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-603492 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-603492 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.8035ms)

-- stdout --
	{"specversion":"1.0","id":"e5f4c8f1-66d7-4061-a0f1-80b6d885ac56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-603492] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ba178e3-bd1a-4095-b687-95cf264d1409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22343"}}
	{"specversion":"1.0","id":"3b04ac90-23f4-4bf4-a2dd-9b614011f751","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed5d2979-045d-485c-bab9-98bb0a92b819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig"}}
	{"specversion":"1.0","id":"7e4b81b3-db53-423b-bd3d-815fa0a68d05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube"}}
	{"specversion":"1.0","id":"c482ea52-984c-42ba-b4d0-7cfde0803253","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d148ff33-eaf3-4ad1-88fa-79bcdddce228","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"594d4d55-ebab-40b1-a8fb-03302eeb7b04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-603492" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-603492
--- PASS: TestErrorJSONOutput (0.24s)
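
The stdout above is the machinery under test: each line is a self-contained CloudEvents-style JSON object, which is what makes --output=json scriptable. A minimal Go sketch of a consumer (not part of the test suite; the struct mirrors only the fields visible in this run's output) that echoes step messages and surfaces error events with their exit codes:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the log above; the real
// schema may carry more.
type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | <this program>
	for sc.Scan() {
		var ev minikubeEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		} else if msg, ok := ev.Data["message"]; ok {
			fmt.Println(msg)
		}
	}
}

Fed the eight lines above, this would print the setup messages and then: error (exit 56): The driver 'fail' is not supported on linux/arm64.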

TestKicCustomNetwork/create_custom_network (34.75s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-716119 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-716119 --network=: (32.514605399s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-716119" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-716119
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-716119: (2.208009902s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.75s)

TestKicCustomNetwork/use_default_bridge_network (30.78s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-966140 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-966140 --network=bridge: (28.630554667s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-966140" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-966140
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-966140: (2.124370625s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.78s)

TestKicExistingNetwork (28.99s)

=== RUN   TestKicExistingNetwork
I1227 09:45:37.380216 3533147 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:45:37.395184 3533147 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:45:37.395280 3533147 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 09:45:37.395298 3533147 cli_runner.go:164] Run: docker network inspect existing-network
W1227 09:45:37.411022 3533147 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 09:45:37.411053 3533147 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1227 09:45:37.411067 3533147 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1227 09:45:37.411167 3533147 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:45:37.428274 3533147 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d8712ba8a9f7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9e:f2:5a:61:6a:4e} reservation:<nil>}
I1227 09:45:37.428621 3533147 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40022e97e0}
I1227 09:45:37.429372 3533147 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 09:45:37.429458 3533147 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 09:45:37.491927 3533147 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-948871 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-948871 --network=existing-network: (26.712437631s)
helpers_test.go:176: Cleaning up "existing-network-948871" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-948871
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-948871: (2.135734809s)
I1227 09:46:06.357979 3533147 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (28.99s)
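
The trace shows what this test actually exercises: the named network does not exist, so minikube scans for a free private /24 (skipping 192.168.49.0/24, already held by another cluster's bridge), creates it, and only then starts the cluster against it. A rough stand-alone reproduction, sketched with os/exec around the docker and minikube invocations taken from the log (minikube's labels omitted; names are this run's):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, printing its output.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Pre-create the network with the same flags the log shows minikube passing.
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"existing-network")
	// Then attach a fresh cluster to the pre-existing network.
	run("out/minikube-linux-arm64", "start",
		"-p", "existing-network-948871", "--network=existing-network")
}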

TestKicCustomSubnet (31.5s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-890105 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-890105 --subnet=192.168.60.0/24: (29.292445254s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-890105 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-890105" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-890105
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-890105: (2.183418813s)
--- PASS: TestKicCustomSubnet (31.50s)

TestKicStaticIP (31.73s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-746978 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-746978 --static-ip=192.168.200.200: (29.369781601s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-746978 ip
helpers_test.go:176: Cleaning up "static-ip-746978" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-746978
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-746978: (2.180109207s)
--- PASS: TestKicStaticIP (31.73s)
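
Scripted outside the test harness, the same assertion is a two-step check: start with --static-ip, then confirm that minikube ip echoes the requested address. A sketch using the exact flag values and profile name from the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.200.200"
	if err := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "static-ip-746978", "--static-ip="+want).Run(); err != nil {
		panic(err)
	}
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-746978", "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("minikube ip reported %q, want %q", got, want))
	}
	fmt.Println("static IP verified:", want)
}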

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (65.18s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-259482 --driver=docker  --container-runtime=containerd
E1227 09:47:28.763565 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-259482 --driver=docker  --container-runtime=containerd: (28.106125049s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-262075 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-262075 --driver=docker  --container-runtime=containerd: (31.095492868s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-259482
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-262075
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-262075" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-262075
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-262075: (2.130805021s)
helpers_test.go:176: Cleaning up "first-259482" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-259482
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-259482: (2.399583804s)
--- PASS: TestMinikubeProfile (65.18s)
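
The test flips the active profile with minikube profile and reads state back via profile list -ojson. The JSON schema is not shown in this log, so the sketch below stays schema-agnostic and decodes into a generic map first (an assumption worth verifying against real output before depending on field names):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var doc map[string]any
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key := range doc {
		fmt.Println("top-level key:", key) // inspect before committing to a schema
	}
}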

TestMountStart/serial/StartWithMountFirst (8.7s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-210380 --memory=3072 --mount-string /tmp/TestMountStartserial1372964199/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-210380 --memory=3072 --mount-string /tmp/TestMountStartserial1372964199/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.699523968s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.70s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-210380 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
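
These two subtests together exercise the host mount: start a no-Kubernetes profile whose --mount-string maps a host directory onto /minikube-host, then list that path over minikube ssh. A stand-alone sketch of the same start-then-verify sequence, reusing the flags from the log (host path shortened; the real test uses a per-run temp dir):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	start := exec.Command("out/minikube-linux-arm64", "start", "-p", "mount-start-1-210380",
		"--memory=3072", "--mount-string", "/tmp/mount-src:/minikube-host",
		"--mount-gid", "0", "--mount-msize", "6543", "--mount-port", "46464",
		"--mount-uid", "0", "--no-kubernetes",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start: %v\n%s", err, out))
	}
	verify := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-210380",
		"ssh", "--", "ls", "/minikube-host")
	out, err := verify.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("ssh ls: %v\n%s", err, out))
	}
	fmt.Printf("mounted contents:\n%s", out)
}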

TestMountStart/serial/StartWithMountSecond (8.76s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-212357 --memory=3072 --mount-string /tmp/TestMountStartserial1372964199/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-212357 --memory=3072 --mount-string /tmp/TestMountStartserial1372964199/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.760395472s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.76s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-212357 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-210380 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-210380 --alsologtostderr -v=5: (1.71946344s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-212357 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-212357
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-212357: (1.282381765s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (8.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-212357
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-212357: (7.218574923s)
--- PASS: TestMountStart/serial/RestartStopped (8.22s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-212357 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (74.5s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-834595 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1227 09:48:51.809567 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:48:52.119975 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-834595 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m13.955878257s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.50s)
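
For reference, the shape of this check as a stand-alone script: bring up a two-node cluster with --nodes=2, then confirm both nodes report in status. A sketch under the assumption (borne out by the status output later in this report) that each node contributes one host: line:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	start := exec.Command("out/minikube-linux-arm64", "start", "-p", "multinode-834595",
		"--wait=true", "--memory=3072", "--nodes=2",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start: %v\n%s", err, out))
	}
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "multinode-834595",
		"status").CombinedOutput()
	if n := strings.Count(string(out), "host:"); n != 2 {
		panic(fmt.Sprintf("expected 2 nodes in status, saw %d:\n%s", n, out))
	}
	fmt.Println("both nodes present")
}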

TestMultiNode/serial/DeployApp2Nodes (5.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-834595 -- rollout status deployment/busybox: (3.514394061s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-5v42q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-bqfj5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-5v42q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-bqfj5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-5v42q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-bqfj5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.50s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-5v42q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-5v42q -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-bqfj5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-834595 -- exec busybox-769dd8b7dd-bqfj5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (28.57s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-834595 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-834595 -v=5 --alsologtostderr: (27.847800351s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.57s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-834595 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

TestMultiNode/serial/CopyFile (10.55s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp testdata/cp-test.txt multinode-834595:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3986982388/001/cp-test_multinode-834595.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595:/home/docker/cp-test.txt multinode-834595-m02:/home/docker/cp-test_multinode-834595_multinode-834595-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m02 "sudo cat /home/docker/cp-test_multinode-834595_multinode-834595-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595:/home/docker/cp-test.txt multinode-834595-m03:/home/docker/cp-test_multinode-834595_multinode-834595-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m03 "sudo cat /home/docker/cp-test_multinode-834595_multinode-834595-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp testdata/cp-test.txt multinode-834595-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3986982388/001/cp-test_multinode-834595-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595-m02:/home/docker/cp-test.txt multinode-834595:/home/docker/cp-test_multinode-834595-m02_multinode-834595.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595 "sudo cat /home/docker/cp-test_multinode-834595-m02_multinode-834595.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595-m02:/home/docker/cp-test.txt multinode-834595-m03:/home/docker/cp-test_multinode-834595-m02_multinode-834595-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m03 "sudo cat /home/docker/cp-test_multinode-834595-m02_multinode-834595-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp testdata/cp-test.txt multinode-834595-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3986982388/001/cp-test_multinode-834595-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595-m03:/home/docker/cp-test.txt multinode-834595:/home/docker/cp-test_multinode-834595-m03_multinode-834595.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595 "sudo cat /home/docker/cp-test_multinode-834595-m03_multinode-834595.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 cp multinode-834595-m03:/home/docker/cp-test.txt multinode-834595-m02:/home/docker/cp-test_multinode-834595-m03_multinode-834595-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 ssh -n multinode-834595-m02 "sudo cat /home/docker/cp-test_multinode-834595-m03_multinode-834595-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.55s)
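
Every pair of lines above is the same round trip: minikube cp a file into place, then minikube ssh -n <node> sudo cat it back to prove the copy landed, across every source/destination combination of the three nodes. Distilled to one leg as a sketch (file and node names from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "multinode-834595"
	// Push a local file onto the m02 node...
	cp := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", "testdata/cp-test.txt", profile+"-m02:/home/docker/cp-test.txt")
	if out, err := cp.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp: %v\n%s", err, out))
	}
	// ...then read it back over ssh to verify.
	cat := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", profile+"-m02", "sudo cat /home/docker/cp-test.txt")
	out, err := cat.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("ssh cat: %v\n%s", err, out))
	}
	fmt.Printf("node copy contains: %s", out)
}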

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-834595 node stop m03: (1.303213123s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-834595 status: exit status 7 (547.500566ms)

-- stdout --
	multinode-834595
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-834595-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-834595-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-834595 status --alsologtostderr: exit status 7 (537.696174ms)

-- stdout --
	multinode-834595
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-834595-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-834595-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 09:50:49.391311 3655247 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:50:49.391427 3655247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:50:49.391438 3655247 out.go:374] Setting ErrFile to fd 2...
	I1227 09:50:49.391444 3655247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:50:49.391696 3655247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:50:49.391901 3655247 out.go:368] Setting JSON to false
	I1227 09:50:49.391930 3655247 mustload.go:66] Loading cluster: multinode-834595
	I1227 09:50:49.391995 3655247 notify.go:221] Checking for updates...
	I1227 09:50:49.392331 3655247 config.go:182] Loaded profile config "multinode-834595": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:50:49.392350 3655247 status.go:174] checking status of multinode-834595 ...
	I1227 09:50:49.392859 3655247 cli_runner.go:164] Run: docker container inspect multinode-834595 --format={{.State.Status}}
	I1227 09:50:49.413647 3655247 status.go:371] multinode-834595 host status = "Running" (err=<nil>)
	I1227 09:50:49.413672 3655247 host.go:66] Checking if "multinode-834595" exists ...
	I1227 09:50:49.413978 3655247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-834595
	I1227 09:50:49.439747 3655247 host.go:66] Checking if "multinode-834595" exists ...
	I1227 09:50:49.440087 3655247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:50:49.440144 3655247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-834595
	I1227 09:50:49.461290 3655247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36090 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/multinode-834595/id_rsa Username:docker}
	I1227 09:50:49.560541 3655247 ssh_runner.go:195] Run: systemctl --version
	I1227 09:50:49.567057 3655247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:50:49.579864 3655247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:50:49.652862 3655247 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 09:50:49.642340568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:50:49.653434 3655247 kubeconfig.go:125] found "multinode-834595" server: "https://192.168.67.2:8443"
	I1227 09:50:49.653480 3655247 api_server.go:166] Checking apiserver status ...
	I1227 09:50:49.653532 3655247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:50:49.666088 3655247 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	I1227 09:50:49.675114 3655247 api_server.go:192] apiserver freezer: "5:freezer:/docker/cf56ac4376b86baa6ab4e8d761ccb04bb015f39f14374a9138d3e9d1a47fa298/kubepods/burstable/podc9528a537be524fd2361d6749158aac4/012d8dc4e9bc42e37267ff908c8931a6fc9ad5fe1328e4e278f16cd3ae6b930e"
	I1227 09:50:49.675188 3655247 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cf56ac4376b86baa6ab4e8d761ccb04bb015f39f14374a9138d3e9d1a47fa298/kubepods/burstable/podc9528a537be524fd2361d6749158aac4/012d8dc4e9bc42e37267ff908c8931a6fc9ad5fe1328e4e278f16cd3ae6b930e/freezer.state
	I1227 09:50:49.683052 3655247 api_server.go:214] freezer state: "THAWED"
	I1227 09:50:49.683078 3655247 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 09:50:49.691860 3655247 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 09:50:49.691897 3655247 status.go:463] multinode-834595 apiserver status = Running (err=<nil>)
	I1227 09:50:49.691910 3655247 status.go:176] multinode-834595 status: &{Name:multinode-834595 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:50:49.691962 3655247 status.go:174] checking status of multinode-834595-m02 ...
	I1227 09:50:49.692291 3655247 cli_runner.go:164] Run: docker container inspect multinode-834595-m02 --format={{.State.Status}}
	I1227 09:50:49.711039 3655247 status.go:371] multinode-834595-m02 host status = "Running" (err=<nil>)
	I1227 09:50:49.711065 3655247 host.go:66] Checking if "multinode-834595-m02" exists ...
	I1227 09:50:49.711393 3655247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-834595-m02
	I1227 09:50:49.729812 3655247 host.go:66] Checking if "multinode-834595-m02" exists ...
	I1227 09:50:49.730134 3655247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:50:49.730180 3655247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-834595-m02
	I1227 09:50:49.748090 3655247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36095 SSHKeyPath:/home/jenkins/minikube-integration/22343-3531265/.minikube/machines/multinode-834595-m02/id_rsa Username:docker}
	I1227 09:50:49.844315 3655247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:50:49.857524 3655247 status.go:176] multinode-834595-m02 status: &{Name:multinode-834595-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:50:49.857561 3655247 status.go:174] checking status of multinode-834595-m03 ...
	I1227 09:50:49.857904 3655247 cli_runner.go:164] Run: docker container inspect multinode-834595-m03 --format={{.State.Status}}
	I1227 09:50:49.875839 3655247 status.go:371] multinode-834595-m03 host status = "Stopped" (err=<nil>)
	I1227 09:50:49.875864 3655247 status.go:384] host is not running, skipping remaining checks
	I1227 09:50:49.875871 3655247 status.go:176] multinode-834595-m03 status: &{Name:multinode-834595-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
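
Note what the subtest asserts: with m03 stopped, minikube status still prints per-node state but exits 7 instead of 0, so callers can branch on the exit code alone. A sketch of recovering that code in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-834595", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 7 in the run above, signalling at least one stopped node.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}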

TestMultiNode/serial/StartAfterStop (7.76s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-834595 node start m03 -v=5 --alsologtostderr: (6.969105677s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.76s)

TestMultiNode/serial/RestartKeepsNodes (78.84s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-834595
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-834595
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-834595: (25.242231345s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-834595 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-834595 --wait=true -v=5 --alsologtostderr: (53.461412152s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-834595
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.84s)

TestMultiNode/serial/DeleteNode (5.75s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-834595 node delete m03: (5.067425435s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.75s)

TestMultiNode/serial/StopMultiNode (24.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 stop
E1227 09:52:28.764358 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-834595 stop: (23.913505047s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-834595 status: exit status 7 (91.522075ms)

-- stdout --
	multinode-834595
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-834595-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-834595 status --alsologtostderr: exit status 7 (97.396723ms)

-- stdout --
	multinode-834595
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-834595-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1227 09:52:46.278051 3664028 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:52:46.278414 3664028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:46.278430 3664028 out.go:374] Setting ErrFile to fd 2...
	I1227 09:52:46.278437 3664028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:46.278745 3664028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:52:46.279004 3664028 out.go:368] Setting JSON to false
	I1227 09:52:46.279047 3664028 mustload.go:66] Loading cluster: multinode-834595
	I1227 09:52:46.279145 3664028 notify.go:221] Checking for updates...
	I1227 09:52:46.279497 3664028 config.go:182] Loaded profile config "multinode-834595": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:52:46.279518 3664028 status.go:174] checking status of multinode-834595 ...
	I1227 09:52:46.280079 3664028 cli_runner.go:164] Run: docker container inspect multinode-834595 --format={{.State.Status}}
	I1227 09:52:46.299264 3664028 status.go:371] multinode-834595 host status = "Stopped" (err=<nil>)
	I1227 09:52:46.299288 3664028 status.go:384] host is not running, skipping remaining checks
	I1227 09:52:46.299296 3664028 status.go:176] multinode-834595 status: &{Name:multinode-834595 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:52:46.299319 3664028 status.go:174] checking status of multinode-834595-m02 ...
	I1227 09:52:46.299624 3664028 cli_runner.go:164] Run: docker container inspect multinode-834595-m02 --format={{.State.Status}}
	I1227 09:52:46.326431 3664028 status.go:371] multinode-834595-m02 host status = "Stopped" (err=<nil>)
	I1227 09:52:46.326456 3664028 status.go:384] host is not running, skipping remaining checks
	I1227 09:52:46.326463 3664028 status.go:176] multinode-834595-m02 status: &{Name:multinode-834595-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

TestMultiNode/serial/RestartMultiNode (48.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-834595 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-834595 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.319736701s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-834595 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.03s)

TestMultiNode/serial/ValidateNameConflict (30.31s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-834595
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-834595-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-834595-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.685304ms)

-- stdout --
	* [multinode-834595-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-834595-m02' is duplicated with machine name 'multinode-834595-m02' in profile 'multinode-834595'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-834595-m03 --driver=docker  --container-runtime=containerd
E1227 09:53:52.120843 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-834595-m03 --driver=docker  --container-runtime=containerd: (27.752800419s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-834595
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-834595: exit status 80 (345.89459ms)

-- stdout --
	* Adding node m03 to cluster multinode-834595 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-834595-m03 already exists in multinode-834595-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-834595-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-834595-m03: (2.064632832s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.31s)

TestScheduledStopUnix (103.45s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-810037 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-810037 --memory=3072 --driver=docker  --container-runtime=containerd: (26.770626047s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-810037 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1227 09:54:35.623671 3673492 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:54:35.623837 3673492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:54:35.623870 3673492 out.go:374] Setting ErrFile to fd 2...
	I1227 09:54:35.623892 3673492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:54:35.624274 3673492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:54:35.624613 3673492 out.go:368] Setting JSON to false
	I1227 09:54:35.624771 3673492 mustload.go:66] Loading cluster: scheduled-stop-810037
	I1227 09:54:35.625427 3673492 config.go:182] Loaded profile config "scheduled-stop-810037": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:54:35.625554 3673492 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/scheduled-stop-810037/config.json ...
	I1227 09:54:35.625778 3673492 mustload.go:66] Loading cluster: scheduled-stop-810037
	I1227 09:54:35.625971 3673492 config.go:182] Loaded profile config "scheduled-stop-810037": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-810037 -n scheduled-stop-810037
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-810037 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 09:54:36.100798 3673582 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:54:36.100942 3673582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:54:36.100955 3673582 out.go:374] Setting ErrFile to fd 2...
	I1227 09:54:36.100972 3673582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:54:36.101266 3673582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:54:36.101569 3673582 out.go:368] Setting JSON to false
	I1227 09:54:36.101804 3673582 daemonize_unix.go:73] killing process 3673508 as it is an old scheduled stop
	I1227 09:54:36.103065 3673582 mustload.go:66] Loading cluster: scheduled-stop-810037
	I1227 09:54:36.103567 3673582 config.go:182] Loaded profile config "scheduled-stop-810037": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:54:36.103689 3673582 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/scheduled-stop-810037/config.json ...
	I1227 09:54:36.103907 3673582 mustload.go:66] Loading cluster: scheduled-stop-810037
	I1227 09:54:36.104070 3673582 config.go:182] Loaded profile config "scheduled-stop-810037": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 09:54:36.111168 3533147 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/scheduled-stop-810037/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-810037 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-810037 -n scheduled-stop-810037
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-810037
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-810037 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1227 09:55:02.102042 3674273 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:55:02.102241 3674273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:55:02.102269 3674273 out.go:374] Setting ErrFile to fd 2...
	I1227 09:55:02.102289 3674273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:55:02.102592 3674273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 09:55:02.102925 3674273 out.go:368] Setting JSON to false
	I1227 09:55:02.103118 3674273 mustload.go:66] Loading cluster: scheduled-stop-810037
	I1227 09:55:02.103532 3674273 config.go:182] Loaded profile config "scheduled-stop-810037": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 09:55:02.103662 3674273 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/scheduled-stop-810037/config.json ...
	I1227 09:55:02.103897 3674273 mustload.go:66] Loading cluster: scheduled-stop-810037
	I1227 09:55:02.104067 3674273 config.go:182] Loaded profile config "scheduled-stop-810037": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
E1227 09:55:15.167707 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-810037
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-810037: exit status 7 (70.900345ms)

                                                
                                                
-- stdout --
	scheduled-stop-810037
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-810037 -n scheduled-stop-810037
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-810037 -n scheduled-stop-810037: exit status 7 (65.375437ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-810037" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-810037
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-810037: (4.997421321s)
--- PASS: TestScheduledStopUnix (103.45s)
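
The scheduled-stop flow above, condensed (profile name is a placeholder; all flags appear in the log):

	minikube stop -p <profile> --schedule 5m       # arm a stop five minutes out
	minikube stop -p <profile> --schedule 15s      # re-arming kills the previous scheduler process first
	minikube stop -p <profile> --cancel-scheduled  # cancels any pending stop
	minikube status -p <profile>                   # exit status 7 once the host reports Stopped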

                                                
                                    
TestInsufficientStorage (12.19s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-068217 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-068217 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.634635164s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b791ac9e-f284-4605-a1ce-2f9f63294b1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-068217] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"017340d8-0a00-428b-8785-4f6428727cde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22343"}}
	{"specversion":"1.0","id":"ab3e2584-0689-41b4-9621-5452d4e45fa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8e42f6e8-8e15-4d07-981e-b77f7f1254c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig"}}
	{"specversion":"1.0","id":"4c2b3b52-d55b-4c74-bcf6-5892244e53ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube"}}
	{"specversion":"1.0","id":"c51b3ba6-0a4e-4dfc-9ba2-97ccea461d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7c7010a4-bc22-47ea-9e57-bd4953b9709b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4f2ec6fa-2365-445d-938f-6921b1ee0c9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"03d656f8-956d-4749-be37-42fbe41395dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"da7648b7-0e7f-4649-8e21-77b5d5d04409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7fc070b-fcd7-4818-ba22-05a8ef3434e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"60f570c8-5256-471d-b159-07cd01c1bb18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-068217\" primary control-plane node in \"insufficient-storage-068217\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3660397c-91fa-4ac9-a02a-07f32bb573c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c02985fb-17b5-4081-93f4-c0c8cea8a5d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bccb1531-95bb-4e43-9a97-2bde8e23e26c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
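
The --output=json stream above is one CloudEvents object per line, so it can be filtered line by line. A sketch with jq (jq is an assumed convenience, not part of the test harness):

	out/minikube-linux-arm64 start -p <profile> --output=json --driver=docker --container-runtime=containerd \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# -> Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.

The shortage itself is simulated through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the event stream.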
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-068217 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-068217 --output=json --layout=cluster: exit status 7 (310.045077ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-068217","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-068217","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:56:02.177222 3676107 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-068217" does not appear in /home/jenkins/minikube-integration/22343-3531265/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-068217 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-068217 --output=json --layout=cluster: exit status 7 (308.419065ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-068217","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-068217","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 09:56:02.484854 3676173 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-068217" does not appear in /home/jenkins/minikube-integration/22343-3531265/kubeconfig
	E1227 09:56:02.496369 3676173 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/insufficient-storage-068217/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-068217" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-068217
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-068217: (1.935819985s)
--- PASS: TestInsufficientStorage (12.19s)

                                                
                                    
TestRunningBinaryUpgrade (70.99s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3000037534 start -p running-upgrade-984753 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3000037534 start -p running-upgrade-984753 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.835970705s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-984753 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-984753 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.903071272s)
helpers_test.go:176: Cleaning up "running-upgrade-984753" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-984753
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-984753: (2.446861643s)
--- PASS: TestRunningBinaryUpgrade (70.99s)
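
The pattern here: bring a cluster up with an older released binary (unpacked to a temp path), then point the freshly built binary at the same, still-running profile. Note the old binary takes the legacy --vm-driver spelling where the new one uses --driver:

	/tmp/minikube-v1.35.0.3000037534 start -p running-upgrade-984753 --memory=3072 --vm-driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 start -p running-upgrade-984753 --memory=3072 --driver=docker --container-runtime=containerd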

                                                
                                    
TestKubernetesUpgrade (334.88s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-416790 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-416790 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.453481567s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-416790 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-416790 --alsologtostderr: (1.397933201s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-416790 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-416790 status --format={{.Host}}: exit status 7 (106.368678ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-416790 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-416790 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.01713129s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-416790 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-416790 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-416790 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (129.368239ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-416790] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-416790
	    minikube start -p kubernetes-upgrade-416790 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4167902 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-416790 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-416790 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-416790 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (12.525864205s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-416790" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-416790
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-416790: (2.130638252s)
--- PASS: TestKubernetesUpgrade (334.88s)
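
Condensed, the version walk this test performs (exit codes from the log above; profile name shortened to <p>):

	minikube start -p <p> --kubernetes-version=v1.28.0   # old cluster
	minikube stop -p <p>
	minikube start -p <p> --kubernetes-version=v1.35.0   # offline upgrade succeeds
	minikube start -p <p> --kubernetes-version=v1.28.0   # downgrade refused: exit status 106 (K8S_DOWNGRADE_UNSUPPORTED)
	minikube start -p <p> --kubernetes-version=v1.35.0   # restart at the current version still works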

                                                
                                    
TestMissingContainerUpgrade (139.67s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1472034785 start -p missing-upgrade-618475 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1472034785 start -p missing-upgrade-618475 --memory=3072 --driver=docker  --container-runtime=containerd: (1m10.53789249s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-618475
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-618475: (3.445511049s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-618475
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-618475 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1227 09:57:28.761558 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-618475 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.361257473s)
helpers_test.go:176: Cleaning up "missing-upgrade-618475" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-618475
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-618475: (2.445822514s)
--- PASS: TestMissingContainerUpgrade (139.67s)

                                                
                                    
TestPause/serial/Start (50.84s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-810701 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-810701 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (50.836889569s)
--- PASS: TestPause/serial/Start (50.84s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.42s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-810701 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-810701 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.383655792s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.42s)

                                                
                                    
TestPause/serial/Pause (1.18s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-810701 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-810701 --alsologtostderr -v=5: (1.179601639s)
--- PASS: TestPause/serial/Pause (1.18s)

                                                
                                    
TestPause/serial/VerifyStatus (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-810701 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-810701 --output=json --layout=cluster: exit status 2 (462.56013ms)

                                                
                                                
-- stdout --
	{"Name":"pause-810701","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-810701","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
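
StatusCode 418 in the cluster layout marks a paused component, and the status command itself exits 2 while paused. A quick check (jq assumed):

	out/minikube-linux-arm64 status -p pause-810701 --output=json --layout=cluster | jq '.StatusCode'
	# 418 == Paused; 200 == OK; 405 == Stopped; 507 == InsufficientStorage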

                                                
                                    
TestPause/serial/Unpause (1.07s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-810701 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-810701 --alsologtostderr -v=5: (1.073963225s)
--- PASS: TestPause/serial/Unpause (1.07s)

                                                
                                    
TestPause/serial/PauseAgain (1.27s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-810701 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-810701 --alsologtostderr -v=5: (1.267370406s)
--- PASS: TestPause/serial/PauseAgain (1.27s)

                                                
                                    
TestPause/serial/DeletePaused (3.57s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-810701 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-810701 --alsologtostderr -v=5: (3.571544282s)
--- PASS: TestPause/serial/DeletePaused (3.57s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.82s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-810701
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-810701: exit status 1 (16.563022ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-810701: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.82s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.76s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (311.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4183301212 start -p stopped-upgrade-200467 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1227 09:58:52.120633 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4183301212 start -p stopped-upgrade-200467 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.653807079s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4183301212 -p stopped-upgrade-200467 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4183301212 -p stopped-upgrade-200467 stop: (1.254680076s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-200467 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1227 10:02:28.761774 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-200467 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.194339282s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (311.10s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-200467
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-200467: (1.248794386s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (68.03s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-587482 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1227 10:03:52.119939 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-587482 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (1m1.171963321s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-587482 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-587482
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-587482: (5.965744243s)
--- PASS: TestPreload/Start-NoPreload-PullImage (68.03s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (47.12s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-587482 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1227 10:05:31.810791 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-587482 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (46.859175191s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-587482 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (47.12s)
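
Taken with the previous test, the preload check is: pull an image into a cluster started with --preload=false, stop it, restart with --preload=true, and confirm the user-pulled image survived (commands from the two logs):

	out/minikube-linux-arm64 start -p test-preload-587482 --preload=false --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p test-preload-587482 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
	out/minikube-linux-arm64 stop -p test-preload-587482
	out/minikube-linux-arm64 start -p test-preload-587482 --preload=true --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p test-preload-587482 image list   # the busybox image should still be listed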

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301502 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-301502 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (89.655863ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-301502] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (26.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301502 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-301502 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.079579077s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-301502 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.46s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301502 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-301502 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.081407708s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-301502 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-301502 status -o json: exit status 2 (316.557221ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-301502","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-301502
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-301502: (1.981874195s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.38s)
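
status -o json plus the exit code distinguishes "host up, Kubernetes down" from a fully running cluster; a small sketch (jq assumed):

	out/minikube-linux-arm64 -p NoKubernetes-301502 status -o json | jq -r '.Host, .Kubelet'
	# Running / Stopped here, and the status command exits 2 rather than 0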

                                                
                                    
TestNoKubernetes/serial/Start (4.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301502 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-301502 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.936620276s)
--- PASS: TestNoKubernetes/serial/Start (4.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22343-3531265/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-301502 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-301502 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.029128ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
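
The check leans on systemctl exit codes: "is-active" returns 0 for an active unit and 3 for an inactive one, which shows up above as "ssh: Process exited with status 3" while minikube ssh itself exits 1. Equivalent by hand:

	out/minikube-linux-arm64 ssh -p NoKubernetes-301502 "sudo systemctl is-active --quiet service kubelet"; echo $?
	# 1 here, because the remote command failed (remote status 3 = inactive)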

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-301502
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-301502: (1.291216602s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301502 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-301502 --driver=docker  --container-runtime=containerd: (6.543091239s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-301502 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-301502 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.989545ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestNetworkPlugins/group/false (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-557039 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-557039 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (198.362095ms)

                                                
                                                
-- stdout --
	* [false-557039] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 10:06:30.445848 3731619 out.go:360] Setting OutFile to fd 1 ...
	I1227 10:06:30.446059 3731619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:30.446085 3731619 out.go:374] Setting ErrFile to fd 2...
	I1227 10:06:30.446104 3731619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 10:06:30.446517 3731619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-3531265/.minikube/bin
	I1227 10:06:30.447129 3731619 out.go:368] Setting JSON to false
	I1227 10:06:30.448117 3731619 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":56943,"bootTime":1766773048,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1227 10:06:30.448238 3731619 start.go:143] virtualization:  
	I1227 10:06:30.451783 3731619 out.go:179] * [false-557039] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 10:06:30.459780 3731619 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 10:06:30.459840 3731619 notify.go:221] Checking for updates...
	I1227 10:06:30.463764 3731619 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 10:06:30.465970 3731619 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-3531265/kubeconfig
	I1227 10:06:30.468184 3731619 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-3531265/.minikube
	I1227 10:06:30.470577 3731619 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 10:06:30.472818 3731619 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 10:06:30.475844 3731619 config.go:182] Loaded profile config "force-systemd-env-194624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I1227 10:06:30.475962 3731619 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 10:06:30.506772 3731619 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 10:06:30.506899 3731619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 10:06:30.573307 3731619 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-27 10:06:30.563561331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 10:06:30.573415 3731619 docker.go:319] overlay module found
	I1227 10:06:30.576281 3731619 out.go:179] * Using the docker driver based on user configuration
	I1227 10:06:30.578885 3731619 start.go:309] selected driver: docker
	I1227 10:06:30.578909 3731619 start.go:928] validating driver "docker" against <nil>
	I1227 10:06:30.578924 3731619 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 10:06:30.582327 3731619 out.go:203] 
	W1227 10:06:30.585059 3731619 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1227 10:06:30.587811 3731619 out.go:203] 

                                                
                                                
** /stderr **
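
The failure is by design: with --container-runtime=containerd a CNI is mandatory, so --cni=false is rejected up front with MK_USAGE (exit status 14) before any node is created. A working variant would name a concrete CNI instead; bridge is shown as one assumed choice:

	out/minikube-linux-arm64 start -p <profile> --cni=bridge --driver=docker --container-runtime=containerd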
net_test.go:88: 
----------------------- debugLogs start: false-557039 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-557039

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-557039

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-557039

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-557039

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-557039

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-557039

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-557039

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf, /etc/hosts, /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-557039

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods; cms:
Error in configuration: context was not found for specified context: false-557039

>>> k8s: describe netcat deployment, netcat pod(s), netcat logs; describe coredns deployment, coredns pods, coredns logs; describe api server pod(s), api server logs; describe kube-proxy daemon set, kube-proxy pod(s), kube-proxy logs:
error: context "false-557039" does not exist

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> host: /etc/nsswitch.conf, /etc/hosts, /etc/resolv.conf, crictl pods, crictl containers, /etc/cni, ip a s, ip r s, iptables-save, iptables table nat, kubelet daemon status, kubelet daemon config, /etc/kubernetes/kubelet.conf, /var/lib/kubelet/config.yaml, docker daemon status, docker daemon config, /etc/docker/daemon.json, docker system info, cri-docker daemon status, cri-docker daemon config, /etc/systemd/system/cri-docker.service.d/10-cni.conf, /usr/lib/systemd/system/cri-docker.service, cri-dockerd version, containerd daemon status, containerd daemon config, /lib/systemd/system/containerd.service, /etc/containerd/config.toml, containerd config dump, crio daemon status, crio daemon config, /etc/crio, crio config (plus k8s: kubelet logs):
* Profile "false-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-557039"

----------------------- debugLogs end: false-557039 [took: 3.237346205s] --------------------------------
helpers_test.go:176: Cleaning up "false-557039" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-557039
--- PASS: TestNetworkPlugins/group/false (3.58s)
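
Note that every debugLogs probe above failed the same way: by the time the collector ran, the "false-557039" profile had been torn down and the kubeconfig was empty (see the kubectl config dump: clusters: null, current-context: ""), so kubectl had no context and minikube had no profile to target. A minimal sketch of confirming that state by hand, using only commands that appear in this log (the expected outputs are assumptions):

	# kubeconfig holds no contexts, so any --context lookup fails
	kubectl config get-contexts
	# the profile is gone, hence the repeated "profile not found" hint
	out/minikube-linux-arm64 profile list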

TestStartStop/group/old-k8s-version/serial/FirstStart (61.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-429745 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1227 10:13:52.119554 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-429745 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m1.124526676s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.12s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-429745 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2400fae3-3c9b-4f02-b31b-44c5f150e953] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2400fae3-3c9b-4f02-b31b-44c5f150e953] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.00407612s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-429745 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)
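
The DeployApp step is a create-then-poll pattern: apply the busybox manifest, wait for the pod labelled integration-test=busybox to report Ready, then exec into it. A rough shell equivalent (kubectl wait stands in for the harness's own polling loop, which is an assumption):

	kubectl --context old-k8s-version-429745 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-429745 wait pod -l integration-test=busybox \
	  --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-429745 exec busybox -- /bin/sh -c "ulimit -n"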

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-429745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-429745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126520036s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-429745 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-429745 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-429745 --alsologtostderr -v=3: (12.075353441s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-429745 -n old-k8s-version-429745
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-429745 -n old-k8s-version-429745: exit status 7 (77.746489ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-429745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
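
Exit status 7 from minikube status is expected here: the cluster was just stopped, and the harness explicitly treats the non-zero status as informational ("may be ok") before enabling the dashboard addon against the stopped profile. A sketch of the same check (the exit-code meaning is inferred from this log, not from minikube documentation):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-429745   # prints "Stopped", exits 7
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-429745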

TestStartStop/group/old-k8s-version/serial/SecondStart (27.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-429745 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-429745 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (27.333453366s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-429745 -n old-k8s-version-429745
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (27.79s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-6rtfv" [e2b9442a-7235-4391-ad79-b66d3369b66f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-6rtfv" [e2b9442a-7235-4391-ad79-b66d3369b66f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004399856s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-6rtfv" [e2b9442a-7235-4391-ad79-b66d3369b66f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.034553713s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-429745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.41s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-429745 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)
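
VerifyKubernetesImages dumps the images present in the node and flags anything outside the stock minikube set; the kindnetd and busybox entries above are expected test images, not failures. A hedged one-liner for inspecting the same list by hand (jq and the repoTags field name are assumptions about the JSON shape):

	out/minikube-linux-arm64 -p old-k8s-version-429745 image list --format=json | jq -r '.[].repoTags[]'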

TestStartStop/group/old-k8s-version/serial/Pause (3.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-429745 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-429745 -n old-k8s-version-429745
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-429745 -n old-k8s-version-429745: exit status 2 (356.091511ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-429745 -n old-k8s-version-429745
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-429745 -n old-k8s-version-429745: exit status 2 (364.396058ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-429745 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-429745 -n old-k8s-version-429745
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-429745 -n old-k8s-version-429745
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.31s)
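
The Pause step exercises both directions of the transition: after pause, status reports APIServer=Paused and Kubelet=Stopped and exits with code 2; after unpause, both queries succeed again. A condensed sketch of the cycle, taken directly from the commands in this log:

	out/minikube-linux-arm64 pause -p old-k8s-version-429745
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-429745   # "Paused", exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-429745     # "Stopped", exit 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-429745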

TestStartStop/group/no-preload/serial/FirstStart (51.54s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-878202 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-878202 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.542892666s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.54s)

TestStartStop/group/no-preload/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-878202 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f8c54e2b-bcc1-4e84-ab63-c0be271d8dc8] Pending
helpers_test.go:353: "busybox" [f8c54e2b-bcc1-4e84-ab63-c0be271d8dc8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f8c54e2b-bcc1-4e84-ab63-c0be271d8dc8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00346334s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-878202 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-878202 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-878202 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-878202 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-878202 --alsologtostderr -v=3: (12.108898448s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-878202 -n no-preload-878202
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-878202 -n no-preload-878202: exit status 7 (75.367181ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-878202 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (51.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-878202 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-878202 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.328063326s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-878202 -n no-preload-878202
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.69s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-m85qx" [d6a088cf-178a-43d4-ae50-e001672ab61e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003489075s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-m85qx" [d6a088cf-178a-43d4-ae50-e001672ab61e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003179633s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-878202 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-878202 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (3.08s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-878202 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-878202 -n no-preload-878202
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-878202 -n no-preload-878202: exit status 2 (341.922225ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-878202 -n no-preload-878202
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-878202 -n no-preload-878202: exit status 2 (312.928256ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-878202 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-878202 -n no-preload-878202
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-878202 -n no-preload-878202
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)

TestStartStop/group/embed-certs/serial/FirstStart (46.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-161350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-161350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (46.929313222s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.93s)
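
The --embed-certs flag makes minikube write certificate data inline into the kubeconfig (client-certificate-data / client-key-data) instead of referencing files under ~/.minikube. A quick hedged check that the cert is embedded (the jsonpath query is illustrative, not part of the test):

	kubectl config view --raw \
	  -o jsonpath='{.users[?(@.name=="embed-certs-161350")].user.client-certificate-data}' | head -c 40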

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-161350 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [750c2b71-24fc-424c-a8b7-1053082244e3] Pending
helpers_test.go:353: "busybox" [750c2b71-24fc-424c-a8b7-1053082244e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [750c2b71-24fc-424c-a8b7-1053082244e3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003278514s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-161350 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-161350 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-161350 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/Stop (12.47s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-161350 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-161350 --alsologtostderr -v=3: (12.466040943s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.47s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-687001 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-687001 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (50.819043875s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.82s)
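
This group starts the API server on port 8444 (--apiserver-port=8444) rather than minikube's default 8443. One way to confirm the port after start (the jsonpath query is illustrative, not part of the test):

	kubectl config view \
	  -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-687001")].cluster.server}'
	# expected: an https URL ending in :8444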

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-161350 -n embed-certs-161350
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-161350 -n embed-certs-161350: exit status 7 (129.521111ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-161350 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/embed-certs/serial/SecondStart (28.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-161350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1227 10:18:52.123098 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:56.534751 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:56.540100 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:56.550356 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:56.570615 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:56.610884 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:56.691257 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:56.852144 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:57.172266 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:57.813412 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:18:59.093556 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:19:01.654312 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:19:06.774778 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-161350 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (27.720330502s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-161350 -n embed-certs-161350
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (28.23s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wkttn" [3afa23df-f0f3-4d45-a3fc-68f410262b7a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003049034s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-wkttn" [3afa23df-f0f3-4d45-a3fc-68f410262b7a] Running
E1227 10:19:17.015080 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003306209s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-161350 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-161350 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-161350 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-161350 -n embed-certs-161350
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-161350 -n embed-certs-161350: exit status 2 (329.830899ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-161350 -n embed-certs-161350
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-161350 -n embed-certs-161350: exit status 2 (347.550464ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-161350 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-161350 -n embed-certs-161350
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-161350 -n embed-certs-161350
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.17s)

TestStartStop/group/newest-cni/serial/FirstStart (33.86s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-347116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-347116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (33.856256697s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.86s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-687001 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a3f2dc84-cab4-4b4c-976f-fd5c69a520c4] Pending
helpers_test.go:353: "busybox" [a3f2dc84-cab4-4b4c-976f-fd5c69a520c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a3f2dc84-cab4-4b4c-976f-fd5c69a520c4] Running
E1227 10:19:37.495501 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004972862s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-687001 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-687001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-687001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.24105163s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-687001 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.36s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-687001 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-687001 --alsologtostderr -v=3: (12.308620136s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001: exit status 7 (80.646379ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-687001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
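
Note: the non-zero exit above is expected on a stopped profile; `minikube status` encodes cluster state in its exit code, which is why the harness logs "may be ok". A minimal sketch of tolerating that from Go, with the binary path and profile name taken from this run (the specific code 7 is simply what this log shows for a stopped host):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Query only the host field, mirroring the test's --format template.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-687001")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A stopped-but-existing profile exits non-zero (7 in the run
		// above); treat it as state information, not a hard failure.
		fmt.Printf("status exited %d, host: %s\n", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not invoke minikube:", err)
		return
	}
	fmt.Printf("host: %s\n", out)
}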

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-687001 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-687001 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (1m0.392188937s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-347116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-347116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.914219504s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-347116 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-347116 --alsologtostderr -v=3: (1.558501761s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-347116 -n newest-cni-347116
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-347116 -n newest-cni-347116: exit status 7 (189.618998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-347116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.3s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-347116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E1227 10:20:18.455735 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-347116 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (17.773907099s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-347116 -n newest-cni-347116
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-347116 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.99s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-347116 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-347116 -n newest-cni-347116
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-347116 -n newest-cni-347116: exit status 2 (345.665279ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-347116 -n newest-cni-347116
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-347116 -n newest-cni-347116: exit status 2 (324.528705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-347116 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-347116 -n newest-cni-347116
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-347116 -n newest-cni-347116
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.99s)
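
Note: while a profile is paused, `status` exits 2 and the Go templates above resolve to "Paused" for the apiserver and "Stopped" for the kubelet; unpausing restores both. A compact sketch of the same pause / inspect / unpause cycle, binary path and profile name from this run (errors from status are deliberately ignored, since non-zero exits are the expected signal while paused):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used by this job and returns its stdout.
// Non-zero exits are expected while the cluster is paused, so the error is
// dropped in this sketch.
func run(args ...string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", args...).Output()
	return string(out)
}

func main() {
	p := "newest-cni-347116"
	run("pause", "-p", p)
	fmt.Println("apiserver:", run("status", "--format={{.APIServer}}", "-p", p))
	fmt.Println("kubelet:", run("status", "--format={{.Kubelet}}", "-p", p))
	run("unpause", "-p", p)
}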

                                                
                                    
TestPreload/PreloadSrc/gcs (4.25s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-046622 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-046622 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (4.053065357s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-046622" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-046622
--- PASS: TestPreload/PreloadSrc/gcs (4.25s)

                                                
                                    
TestPreload/PreloadSrc/github (4.65s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-017098 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-017098 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (4.448376012s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-017098" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-017098
--- PASS: TestPreload/PreloadSrc/github (4.65s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.46s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-334858 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-334858" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-334858
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.46s)
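
Note: this third subtest finishes in well under a second because the v1.34.0-rc.2 preload tarball was already fetched by the preceding "github" run, so the gcs-sourced start finds it on disk. A sketch of checking for that cached artifact, assuming minikube's default cache layout under ~/.minikube (a glob is used because the preload schema version embedded in the filename varies by release):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		fmt.Println("no home dir:", err)
		return
	}
	// Preload tarballs are kept under .minikube/cache/preloaded-tarball/.
	pattern := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"*v1.34.0-rc.2*containerd*.tar.lz4")
	matches, _ := filepath.Glob(pattern)
	if len(matches) == 0 {
		fmt.Println("no cached preload; the next start would download it")
		return
	}
	for _, m := range matches {
		fmt.Println("cached preload:", m)
	}
}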

                                                
                                    
TestNetworkPlugins/group/auto/Start (47.14s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (47.136739219s)
--- PASS: TestNetworkPlugins/group/auto/Start (47.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-pzzp9" [d4697ee1-5e85-43fc-9430-7b17b0ca8071] Running
E1227 10:20:58.791110 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:20:58.796377 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:20:58.806619 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:20:58.826904 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:20:58.867208 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:20:58.947502 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:20:59.107903 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:20:59.428693 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:21:00.082927 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.014272348s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-pzzp9" [d4697ee1-5e85-43fc-9430-7b17b0ca8071] Running
E1227 10:21:01.363564 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:21:03.923720 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003198088s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-687001 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-687001 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)
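
Note: the "Found non-minikube image" lines come from listing the node's images as JSON and diffing them against the image set minikube itself deploys for this Kubernetes version. A sketch that reproduces the listing half, assuming the output is a JSON array of objects carrying a repoTags field (adjust the struct if the actual schema differs; the real test does the set comparison, this sketch only dumps the tags):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image captures only the field this sketch needs from `image list`.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "default-k8s-diff-port-687001",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected schema:", err)
		return
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}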

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-687001 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001: exit status 2 (579.547922ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001: exit status 2 (531.79562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-687001 --alsologtostderr -v=1
E1227 10:21:09.044123 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-687001 --alsologtostderr -v=1: (1.15596815s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-687001 -n default-k8s-diff-port-687001
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.46s)
E1227 10:25:58.791942 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (48.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1227 10:21:19.284289 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (48.191230569s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.19s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-557039 "pgrep -a kubelet"
I1227 10:21:25.158836 3533147 config.go:182] Loaded profile config "auto-557039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)
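
Note: the KubeletFlags check amounts to reading the kubelet's command line on the node; `pgrep -a` prints each matching pid together with its full argv, which is where the flags under test appear. The same probe outside the harness, with binary path and profile taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `minikube ssh` runs the quoted command inside the node container.
	out, err := exec.Command("out/minikube-linux-arm64",
		"ssh", "-p", "auto-557039", "pgrep -a kubelet").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Print(string(out))
}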

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-557039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-frptj" [4c1c1140-56dc-4007-97d7-c5cc42338d9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-frptj" [4c1c1140-56dc-4007-97d7-c5cc42338d9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004759257s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.35s)
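
Note: each NetCatPod step applies testdata/netcat-deployment.yaml and then polls pods labelled app=netcat until they are Running and Ready, which is why the log shows the Pending → Running transition. Outside the harness, roughly the same wait can be expressed with `kubectl wait`; a sketch using the context from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every app=netcat pod reports Ready, bounded by the
	// same 15m ceiling the test uses.
	cmd := exec.Command("kubectl", "--context", "auto-557039",
		"wait", "--for=condition=Ready", "pod",
		"-l", "app=netcat", "--timeout=15m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}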

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-557039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
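
Note: the last three subtests probe the same deployment three ways: DNS resolves kubernetes.default inside the pod, Localhost dials the pod's own port over loopback, and HairPin dials the pod back through its own service name (hairpin NAT). A sketch that reruns the two nc probes, using the exact commands from the log:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs nc inside the netcat deployment. "localhost" exercises the
// loopback path; the service name "netcat" exercises hairpin NAT back to
// the same pod via its service.
func probe(target string) error {
	return exec.Command("kubectl", "--context", "auto-557039",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z "+target+" 8080").Run()
}

func main() {
	for _, t := range []string{"localhost", "netcat"} {
		if err := probe(t); err != nil {
			fmt.Printf("%s probe failed: %v\n", t, err)
			continue
		}
		fmt.Printf("%s probe ok\n", t)
	}
}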

                                                
                                    
TestNetworkPlugins/group/calico/Start (58.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.840661479s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.84s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-t7cm6" [28690f26-886b-4054-a50d-ba8a4667310e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004209569s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-557039 "pgrep -a kubelet"
I1227 10:22:08.835393 3533147 config.go:182] Loaded profile config "kindnet-557039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-557039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-9666p" [ab80b96f-8b21-4295-82ee-7828816fa004] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 10:22:11.811148 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/addons-888652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-9666p" [ab80b96f-8b21-4295-82ee-7828816fa004] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004740922s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-557039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (58.72s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.716226981s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.72s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-mfpjd" [3535c6fa-66cd-4087-a558-1b81eab5a2c9] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-mfpjd" [3535c6fa-66cd-4087-a558-1b81eab5a2c9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00474233s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-557039 "pgrep -a kubelet"
I1227 10:23:04.499269 3533147 config.go:182] Loaded profile config "calico-557039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-557039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-76xgb" [c462f375-b0c3-434c-ac22-8605faa1cf1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-76xgb" [c462f375-b0c3-434c-ac22-8605faa1cf1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004156962s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-557039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (79.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1227 10:23:42.646267 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m19.129105736s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-557039 "pgrep -a kubelet"
I1227 10:23:44.098301 3533147 config.go:182] Loaded profile config "custom-flannel-557039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-557039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-wm77t" [1f73026a-1bb9-4598-a4c3-1f72158540a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-wm77t" [1f73026a-1bb9-4598-a4c3-1f72158540a5] Running
E1227 10:23:52.119809 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/functional-237950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003983206s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-557039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (49.78s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1227 10:24:24.216198 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/old-k8s-version-429745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:29.199311 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:29.204646 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:29.214912 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:29.235257 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:29.275651 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:29.355909 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:29.516320 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:29.836601 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:30.477223 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:31.758396 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:34.318896 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:39.439470 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:24:49.680164 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (49.778452753s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.78s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-557039 "pgrep -a kubelet"
I1227 10:24:59.347201 3533147 config.go:182] Loaded profile config "enable-default-cni-557039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-557039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-t5qkl" [91449bc4-76f6-48e8-8625-39674fc0f0fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-t5qkl" [91449bc4-76f6-48e8-8625-39674fc0f0fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003256253s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-557039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-7v62c" [66652dff-969e-42ab-8191-fb3c672554ce] Running
E1227 10:25:10.161398 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/default-k8s-diff-port-687001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008191106s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-557039 "pgrep -a kubelet"
I1227 10:25:15.996278 3533147 config.go:182] Loaded profile config "flannel-557039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-557039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-6sz49" [1fd0fa2e-c6a3-41dd-8370-ca2db356f53f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-6sz49" [1fd0fa2e-c6a3-41dd-8370-ca2db356f53f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.01114467s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-557039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (48.2s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-557039 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (48.204850944s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-557039 "pgrep -a kubelet"
I1227 10:26:19.777553 3533147 config.go:182] Loaded profile config "bridge-557039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-557039 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lmzx2" [df8271ff-d86f-4aaa-a1cf-9ebbe42453a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-lmzx2" [df8271ff-d86f-4aaa-a1cf-9ebbe42453a2] Running
E1227 10:26:25.463198 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:25.468544 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:25.478970 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:25.499255 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:25.539623 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:25.620010 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:25.780429 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:26.101456 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:26.487019 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/no-preload-878202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:26.742509 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:28.022791 3533147 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-3531265/.minikube/profiles/auto-557039/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00420481s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)
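
The NetCatPod step deploys testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat reports Running (about 9 seconds here, with a 15m0s ceiling). Below is a hedged client-go sketch of that polling idea, assuming the bridge-557039 kubeconfig context and the default namespace shown in the log; it is not minikube's actual helper.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client for the bridge-557039 context from the usual kubeconfig.
        cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
            clientcmd.NewDefaultClientConfigLoadingRules(),
            &clientcmd.ConfigOverrides{CurrentContext: "bridge-557039"},
        ).ClientConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until a pod matching the label is Running, up to 15 minutes.
        deadline := time.Now().Add(15 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "app=netcat"})
            if err == nil && len(pods.Items) > 0 &&
                pods.Items[0].Status.Phase == corev1.PodRunning {
                fmt.Println("app=netcat is running")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for app=netcat")
    }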

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-557039 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)
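
The DNS step execs nslookup inside the netcat pod; success means cluster DNS expands the short name kubernetes.default through the pod's resolv.conf search path. A small sketch of the same probe, assuming kubectl on PATH and the context still present:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command the test runs, shelled out through kubectl.
        out, err := exec.Command("kubectl", "--context", "bridge-557039",
            "exec", "deployment/netcat", "--",
            "nslookup", "kubernetes.default").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("lookup failed:", err)
        }
    }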

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-557039 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
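
Localhost and HairPin run the same nc invocation against two different paths: localhost:8080 never leaves the pod's network namespace, while netcat:8080 goes out through the service VIP and must be hairpinned back to the same pod by the bridge. A combined sketch, assuming kubectl on PATH and the service name and port shown in the commands above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probe runs the same nc check the tests use against a given target.
    func probe(target string) error {
        return exec.Command("kubectl", "--context", "bridge-557039",
            "exec", "deployment/netcat", "--",
            "/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
    }

    func main() {
        // "localhost" stays inside the pod; "netcat" round-trips through the
        // service VIP and only succeeds if hairpin NAT is in effect.
        for _, target := range []string{"localhost", "netcat"} {
            if err := probe(target); err != nil {
                fmt.Printf("%s: FAIL (%v)\n", target, err)
            } else {
                fmt.Printf("%s: ok\n", target)
            }
        }
    }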

                                                
                                    

Test skip (30/337)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists; the binaries are already present within it.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test only applies to darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists; the binaries are already present within it.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test only applies to darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.64s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-697871 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-697871" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-697871
--- SKIP: TestDownloadOnlyKic (0.64s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container-based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skipping the AMD GPU test; it only runs with the docker driver on the amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
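
Most SKIP lines in this section come from a simple gate at the top of each test. A hypothetical sketch of the pattern follows; the flag name and wiring below are illustrative and not minikube's actual code.

    package docker_test

    import (
        "flag"
        "testing"
    )

    // Illustrative stand-in for the harness's runtime selection.
    var containerRuntime = flag.String("container-runtime", "docker",
        "container runtime under test")

    func TestDockerFlagsSketch(t *testing.T) {
        if *containerRuntime != "docker" {
            // Mirrors the kind of message logged above for TestDockerFlags.
            t.Skipf("skipping: only runs with docker container runtime, currently testing %s",
                *containerRuntime)
        }
        // docker-specific assertions would follow here.
    }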

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql; skipping the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: docker-env is only validated with the docker container runtime; currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: podman-env is only validated with the docker container runtime; currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env; currently testing the containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-569457" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-569457
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-557039 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-557039

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-557039

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /etc/hosts:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /etc/resolv.conf:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-557039

>>> host: crictl pods:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: crictl containers:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> k8s: describe netcat deployment:
error: context "kubenet-557039" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-557039" does not exist

>>> k8s: netcat logs:
error: context "kubenet-557039" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-557039" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-557039" does not exist

>>> k8s: coredns logs:
error: context "kubenet-557039" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-557039" does not exist

>>> k8s: api server logs:
error: context "kubenet-557039" does not exist

>>> host: /etc/cni:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: ip a s:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: ip r s:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: iptables-save:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: iptables table nat:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-557039" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-557039" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-557039" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: kubelet daemon config:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> k8s: kubelet logs:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-557039

>>> host: docker daemon status:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: docker daemon config:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: docker system info:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: cri-docker daemon status:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: cri-docker daemon config:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: cri-dockerd version:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: containerd daemon status:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: containerd daemon config:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: containerd config dump:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: crio daemon status:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: crio daemon config:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: /etc/crio:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

>>> host: crio config:
* Profile "kubenet-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-557039"

----------------------- debugLogs end: kubenet-557039 [took: 3.369506983s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-557039" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-557039
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-557039 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-557039

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-557039

>>> host: /etc/nsswitch.conf:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /etc/hosts:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /etc/resolv.conf:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-557039

>>> host: crictl pods:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: crictl containers:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> k8s: describe netcat deployment:
error: context "cilium-557039" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-557039" does not exist

>>> k8s: netcat logs:
error: context "cilium-557039" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-557039" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-557039" does not exist

>>> k8s: coredns logs:
error: context "cilium-557039" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-557039" does not exist

>>> k8s: api server logs:
error: context "cilium-557039" does not exist

>>> host: /etc/cni:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: ip a s:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: ip r s:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: iptables-save:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: iptables table nat:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-557039

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-557039

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-557039" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-557039" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-557039

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-557039

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-557039" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-557039" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-557039" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-557039" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-557039" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: kubelet daemon config:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> k8s: kubelet logs:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-557039

>>> host: docker daemon status:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: docker daemon config:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: docker system info:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: cri-docker daemon status:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: cri-docker daemon config:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: cri-dockerd version:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: containerd daemon status:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: containerd daemon config:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: containerd config dump:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: crio daemon status:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: crio daemon config:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: /etc/crio:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

>>> host: crio config:
* Profile "cilium-557039" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557039"

----------------------- debugLogs end: cilium-557039 [took: 3.661238222s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-557039" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-557039
--- SKIP: TestNetworkPlugins/group/cilium (3.81s)

                                                
                                    