Test Report: Docker_Linux_containerd_arm64 22402

783b0304fb34eb1d9554b20c324bb66df0781ba8:2026-01-11:43196

Failed tests (2/337)

Order  Failed test           Duration (s)
52     TestForceSystemdFlag  505.26
53     TestForceSystemdEnv   506.41
TestForceSystemdFlag (505.26s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-610060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0111 08:12:40.285571 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:13:00.476718 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-610060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m21.490172261s)

-- stdout --
	* [force-systemd-flag-610060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-610060" primary control-plane node in "force-systemd-flag-610060" cluster
	* Pulling base image v0.0.48-1768032998-22402 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

-- /stdout --
** stderr ** 
	I0111 08:11:45.966483 3329885 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:11:45.966703 3329885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:11:45.966732 3329885 out.go:374] Setting ErrFile to fd 2...
	I0111 08:11:45.966751 3329885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:11:45.967177 3329885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 08:11:45.968023 3329885 out.go:368] Setting JSON to false
	I0111 08:11:45.968908 3329885 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":50057,"bootTime":1768069049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 08:11:45.968983 3329885 start.go:143] virtualization:  
	I0111 08:11:45.972677 3329885 out.go:179] * [force-systemd-flag-610060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:11:45.977345 3329885 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:11:45.977453 3329885 notify.go:221] Checking for updates...
	I0111 08:11:45.984099 3329885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:11:45.987358 3329885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 08:11:45.990611 3329885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 08:11:45.993730 3329885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:11:45.996854 3329885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:11:46.002916 3329885 config.go:182] Loaded profile config "force-systemd-env-305397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:11:46.003074 3329885 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:11:46.034142 3329885 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:11:46.034275 3329885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:11:46.125120 3329885 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:11:46.113366797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:11:46.125226 3329885 docker.go:319] overlay module found
	I0111 08:11:46.128633 3329885 out.go:179] * Using the docker driver based on user configuration
	I0111 08:11:46.131564 3329885 start.go:309] selected driver: docker
	I0111 08:11:46.131591 3329885 start.go:928] validating driver "docker" against <nil>
	I0111 08:11:46.131605 3329885 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:11:46.132458 3329885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:11:46.188583 3329885 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:11:46.179395708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:11:46.188739 3329885 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:11:46.188960 3329885 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:11:46.191962 3329885 out.go:179] * Using Docker driver with root privileges
	I0111 08:11:46.194890 3329885 cni.go:84] Creating CNI manager for ""
	I0111 08:11:46.194959 3329885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:11:46.194975 3329885 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:11:46.195053 3329885 start.go:353] cluster config:
	{Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I0111 08:11:46.200121 3329885 out.go:179] * Starting "force-systemd-flag-610060" primary control-plane node in "force-systemd-flag-610060" cluster
	I0111 08:11:46.203055 3329885 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0111 08:11:46.206054 3329885 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:11:46.208898 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:11:46.208958 3329885 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I0111 08:11:46.208971 3329885 cache.go:65] Caching tarball of preloaded images
	I0111 08:11:46.208985 3329885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:11:46.209059 3329885 preload.go:251] Found /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0111 08:11:46.209070 3329885 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I0111 08:11:46.209177 3329885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json ...
	I0111 08:11:46.209198 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json: {Name:mke00c980f6aa6c98163914c28e2b3a0179313f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:46.228792 3329885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:11:46.228814 3329885 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:11:46.228829 3329885 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:11:46.228857 3329885 start.go:360] acquireMachinesLock for force-systemd-flag-610060: {Name:mk7b285d446b288e2ef1025bb5bf30ad660e990b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:11:46.228963 3329885 start.go:364] duration metric: took 84.946µs to acquireMachinesLock for "force-systemd-flag-610060"
	I0111 08:11:46.228995 3329885 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0111 08:11:46.229072 3329885 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:11:46.232524 3329885 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:11:46.232749 3329885 start.go:159] libmachine.API.Create for "force-systemd-flag-610060" (driver="docker")
	I0111 08:11:46.232785 3329885 client.go:173] LocalClient.Create starting
	I0111 08:11:46.232857 3329885 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem
	I0111 08:11:46.232894 3329885 main.go:144] libmachine: Decoding PEM data...
	I0111 08:11:46.232913 3329885 main.go:144] libmachine: Parsing certificate...
	I0111 08:11:46.232970 3329885 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem
	I0111 08:11:46.232992 3329885 main.go:144] libmachine: Decoding PEM data...
	I0111 08:11:46.233007 3329885 main.go:144] libmachine: Parsing certificate...
	I0111 08:11:46.233367 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:11:46.250050 3329885 cli_runner.go:211] docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:11:46.250150 3329885 network_create.go:284] running [docker network inspect force-systemd-flag-610060] to gather additional debugging logs...
	I0111 08:11:46.250170 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060
	W0111 08:11:46.264851 3329885 cli_runner.go:211] docker network inspect force-systemd-flag-610060 returned with exit code 1
	I0111 08:11:46.264883 3329885 network_create.go:287] error running [docker network inspect force-systemd-flag-610060]: docker network inspect force-systemd-flag-610060: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-610060 not found
	I0111 08:11:46.264896 3329885 network_create.go:289] output of [docker network inspect force-systemd-flag-610060]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-610060 not found
	
	** /stderr **
	I0111 08:11:46.265009 3329885 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:11:46.281585 3329885 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d6a2604bb10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:cd:63:f9:b2:f8} reservation:<nil>}
	I0111 08:11:46.281997 3329885 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cec031213447 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:71:bf:56:ac:cb} reservation:<nil>}
	I0111 08:11:46.282212 3329885 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0e2d137ca1da IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:68:81:9e:35:63} reservation:<nil>}
	I0111 08:11:46.282485 3329885 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9455289443b5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:d1:66:6a:84:dd} reservation:<nil>}
	I0111 08:11:46.282935 3329885 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a129c0}
	I0111 08:11:46.282958 3329885 network_create.go:124] attempt to create docker network force-systemd-flag-610060 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 08:11:46.283014 3329885 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-610060 force-systemd-flag-610060
	I0111 08:11:46.338524 3329885 network_create.go:108] docker network force-systemd-flag-610060 192.168.85.0/24 created
	I0111 08:11:46.338555 3329885 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-610060" container
	I0111 08:11:46.338639 3329885 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:11:46.354768 3329885 cli_runner.go:164] Run: docker volume create force-systemd-flag-610060 --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:11:46.372694 3329885 oci.go:103] Successfully created a docker volume force-systemd-flag-610060
	I0111 08:11:46.372798 3329885 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-610060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --entrypoint /usr/bin/test -v force-systemd-flag-610060:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:11:46.921885 3329885 oci.go:107] Successfully prepared a docker volume force-systemd-flag-610060
	I0111 08:11:46.921940 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:11:46.921951 3329885 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:11:46.922032 3329885 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-610060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:11:50.731187 3329885 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-610060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.809106983s)
	I0111 08:11:50.731222 3329885 kic.go:203] duration metric: took 3.80926748s to extract preloaded images to volume ...
	W0111 08:11:50.731361 3329885 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:11:50.731477 3329885 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:11:50.797692 3329885 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-610060 --name force-systemd-flag-610060 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-610060 --network force-systemd-flag-610060 --ip 192.168.85.2 --volume force-systemd-flag-610060:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:11:51.110888 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Running}}
	I0111 08:11:51.136837 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
	I0111 08:11:51.165956 3329885 cli_runner.go:164] Run: docker exec force-systemd-flag-610060 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:11:51.215991 3329885 oci.go:144] the created container "force-systemd-flag-610060" has a running status.
	I0111 08:11:51.216037 3329885 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa...
	I0111 08:11:51.516534 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:11:51.516633 3329885 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:11:51.539007 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
	I0111 08:11:51.567105 3329885 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:11:51.567123 3329885 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-610060 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:11:51.645455 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
	I0111 08:11:51.680580 3329885 machine.go:94] provisionDockerMachine start ...
	I0111 08:11:51.680675 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:51.710716 3329885 main.go:144] libmachine: Using SSH client type: native
	I0111 08:11:51.711064 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0111 08:11:51.711073 3329885 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:11:51.711854 3329885 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 08:11:54.859728 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-610060
	
	I0111 08:11:54.859755 3329885 ubuntu.go:182] provisioning hostname "force-systemd-flag-610060"
	I0111 08:11:54.859827 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:54.876832 3329885 main.go:144] libmachine: Using SSH client type: native
	I0111 08:11:54.877152 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0111 08:11:54.877172 3329885 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-610060 && echo "force-systemd-flag-610060" | sudo tee /etc/hostname
	I0111 08:11:55.043732 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-610060
	
	I0111 08:11:55.043827 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.066688 3329885 main.go:144] libmachine: Using SSH client type: native
	I0111 08:11:55.067032 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0111 08:11:55.067054 3329885 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-610060' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-610060/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-610060' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:11:55.224621 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:11:55.224644 3329885 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-3122619/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-3122619/.minikube}
	I0111 08:11:55.224663 3329885 ubuntu.go:190] setting up certificates
	I0111 08:11:55.224672 3329885 provision.go:84] configureAuth start
	I0111 08:11:55.224733 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
	I0111 08:11:55.242257 3329885 provision.go:143] copyHostCerts
	I0111 08:11:55.242309 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
	I0111 08:11:55.242342 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem, removing ...
	I0111 08:11:55.242359 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
	I0111 08:11:55.242440 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem (1078 bytes)
	I0111 08:11:55.242520 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
	I0111 08:11:55.242542 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem, removing ...
	I0111 08:11:55.242556 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
	I0111 08:11:55.242586 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem (1123 bytes)
	I0111 08:11:55.242658 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
	I0111 08:11:55.242679 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem, removing ...
	I0111 08:11:55.242686 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
	I0111 08:11:55.242713 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem (1675 bytes)
	I0111 08:11:55.242763 3329885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-610060 san=[127.0.0.1 192.168.85.2 force-systemd-flag-610060 localhost minikube]
	I0111 08:11:55.423643 3329885 provision.go:177] copyRemoteCerts
	I0111 08:11:55.423714 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:11:55.423760 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.442089 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:55.544114 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:11:55.544174 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:11:55.562451 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:11:55.562560 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:11:55.579624 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:11:55.579720 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 08:11:55.597215 3329885 provision.go:87] duration metric: took 372.519842ms to configureAuth
	I0111 08:11:55.597285 3329885 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:11:55.597493 3329885 config.go:182] Loaded profile config "force-systemd-flag-610060": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:11:55.597509 3329885 machine.go:97] duration metric: took 3.916909939s to provisionDockerMachine
	I0111 08:11:55.597517 3329885 client.go:176] duration metric: took 9.364722727s to LocalClient.Create
	I0111 08:11:55.597537 3329885 start.go:167] duration metric: took 9.364789212s to libmachine.API.Create "force-systemd-flag-610060"
	I0111 08:11:55.597550 3329885 start.go:293] postStartSetup for "force-systemd-flag-610060" (driver="docker")
	I0111 08:11:55.597559 3329885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:11:55.597617 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:11:55.597673 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.614880 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:55.720221 3329885 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:11:55.723472 3329885 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:11:55.723501 3329885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:11:55.723512 3329885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/addons for local assets ...
	I0111 08:11:55.723589 3329885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/files for local assets ...
	I0111 08:11:55.723683 3329885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> 31244842.pem in /etc/ssl/certs
	I0111 08:11:55.723702 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> /etc/ssl/certs/31244842.pem
	I0111 08:11:55.723821 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:11:55.731084 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /etc/ssl/certs/31244842.pem (1708 bytes)
	I0111 08:11:55.748115 3329885 start.go:296] duration metric: took 150.541395ms for postStartSetup
	I0111 08:11:55.748506 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
	I0111 08:11:55.765507 3329885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json ...
	I0111 08:11:55.765856 3329885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:11:55.765912 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.782246 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:55.885136 3329885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:11:55.889854 3329885 start.go:128] duration metric: took 9.66076858s to createHost
	I0111 08:11:55.889877 3329885 start.go:83] releasing machines lock for "force-systemd-flag-610060", held for 9.660899777s
	I0111 08:11:55.889946 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
	I0111 08:11:55.909521 3329885 ssh_runner.go:195] Run: cat /version.json
	I0111 08:11:55.909572 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.909672 3329885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:11:55.909730 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.929746 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:55.940401 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:56.137299 3329885 ssh_runner.go:195] Run: systemctl --version
	I0111 08:11:56.144072 3329885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:11:56.149649 3329885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:11:56.149741 3329885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:11:56.178398 3329885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:11:56.178426 3329885 start.go:496] detecting cgroup driver to use...
	I0111 08:11:56.178440 3329885 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:11:56.178497 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0111 08:11:56.194017 3329885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0111 08:11:56.207355 3329885 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:11:56.207437 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:11:56.225243 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:11:56.244325 3329885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:11:56.364184 3329885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:11:56.477116 3329885 docker.go:234] disabling docker service ...
	I0111 08:11:56.477205 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:11:56.497704 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:11:56.510638 3329885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:11:56.657297 3329885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:11:56.780195 3329885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:11:56.793449 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:11:56.808590 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0111 08:11:56.818025 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0111 08:11:56.826953 3329885 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0111 08:11:56.827070 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0111 08:11:56.836326 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:11:56.845203 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0111 08:11:56.854138 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:11:56.862604 3329885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:11:56.870988 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0111 08:11:56.879524 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0111 08:11:56.888444 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0111 08:11:56.897577 3329885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:11:56.905221 3329885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:11:56.912587 3329885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:57.029083 3329885 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0111 08:11:57.166782 3329885 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I0111 08:11:57.166926 3329885 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0111 08:11:57.170932 3329885 start.go:574] Will wait 60s for crictl version
	I0111 08:11:57.171048 3329885 ssh_runner.go:195] Run: which crictl
	I0111 08:11:57.174867 3329885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:11:57.199898 3329885 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I0111 08:11:57.199981 3329885 ssh_runner.go:195] Run: containerd --version
	I0111 08:11:57.219306 3329885 ssh_runner.go:195] Run: containerd --version
	I0111 08:11:57.244995 3329885 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I0111 08:11:57.248122 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:11:57.264038 3329885 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 08:11:57.267824 3329885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:11:57.277801 3329885 kubeadm.go:884] updating cluster {Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:11:57.278152 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:11:57.278240 3329885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:11:57.315254 3329885 containerd.go:635] all images are preloaded for containerd runtime.
	I0111 08:11:57.315275 3329885 containerd.go:542] Images already preloaded, skipping extraction
	I0111 08:11:57.315336 3329885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:11:57.349393 3329885 containerd.go:635] all images are preloaded for containerd runtime.
	I0111 08:11:57.349415 3329885 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:11:57.349423 3329885 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I0111 08:11:57.349517 3329885 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-610060 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:11:57.349582 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I0111 08:11:57.382639 3329885 cni.go:84] Creating CNI manager for ""
	I0111 08:11:57.382663 3329885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:11:57.382685 3329885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:11:57.382708 3329885 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-610060 NodeName:force-systemd-flag-610060 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:11:57.382828 3329885 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-610060"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:11:57.382905 3329885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:11:57.390559 3329885 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:11:57.390630 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:11:57.398214 3329885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0111 08:11:57.410850 3329885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:11:57.424327 3329885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I0111 08:11:57.436984 3329885 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:11:57.440400 3329885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:11:57.450402 3329885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:57.573600 3329885 ssh_runner.go:195] Run: sudo systemctl start kubelet
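The two files copied a few lines above are the kubelet systemd unit and its kubeadm drop-in; daemon-reload plus start then brings the service up before kubeadm runs. When the unit later fails to become healthy, a first check is what systemd actually merged for it. A sketch, again assuming docker exec access to the node container:

	# The unit plus every drop-in systemd merged into it (10-kubeadm.conf included).
	docker exec force-systemd-flag-610060 systemctl cat kubelet
	# The flags file the drop-in points the kubelet at, once kubeadm has written it.
	docker exec force-systemd-flag-610060 cat /var/lib/kubelet/kubeadm-flags.env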
	I0111 08:11:57.590952 3329885 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060 for IP: 192.168.85.2
	I0111 08:11:57.590987 3329885 certs.go:195] generating shared ca certs ...
	I0111 08:11:57.591004 3329885 certs.go:227] acquiring lock for ca certs: {Name:mk4f88e5992499f3a8089baf463e3ba7f81a52c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:57.591198 3329885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key
	I0111 08:11:57.591246 3329885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key
	I0111 08:11:57.591260 3329885 certs.go:257] generating profile certs ...
	I0111 08:11:57.591327 3329885 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key
	I0111 08:11:57.591359 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt with IP's: []
	I0111 08:11:58.180659 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt ...
	I0111 08:11:58.180706 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt: {Name:mk9bd0b635b7181a879895561a6d686f28614647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.180963 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key ...
	I0111 08:11:58.180982 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key: {Name:mkfe2120f2e6288c7ad6ca3b08d9dccc6b76b069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.181090 3329885 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120
	I0111 08:11:58.181117 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 08:11:58.711099 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 ...
	I0111 08:11:58.711132 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120: {Name:mke960834fa45cb1bccf7b579ab4a287f777445c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.711369 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120 ...
	I0111 08:11:58.711385 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120: {Name:mkf73783f074957828edc09fa9ea5a4548656c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.711473 3329885 certs.go:382] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt
	I0111 08:11:58.711554 3329885 certs.go:386] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key
	I0111 08:11:58.711646 3329885 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key
	I0111 08:11:58.711665 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt with IP's: []
	I0111 08:11:58.912664 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt ...
	I0111 08:11:58.912696 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt: {Name:mk922fc5010cb627196768e155857c21dcb7d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.912882 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key ...
	I0111 08:11:58.912895 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key: {Name:mk6bffbe07eace11218581bafe3df67bbad9745d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.912983 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:11:58.913003 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:11:58.913015 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:11:58.913030 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:11:58.913042 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:11:58.913059 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:11:58.913074 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:11:58.913089 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:11:58.913153 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem (1338 bytes)
	W0111 08:11:58.913196 3329885 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484_empty.pem, impossibly tiny 0 bytes
	I0111 08:11:58.913209 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 08:11:58.913237 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:11:58.913264 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:11:58.913300 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem (1675 bytes)
	I0111 08:11:58.913351 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem (1708 bytes)
	I0111 08:11:58.913383 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:58.913398 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem -> /usr/share/ca-certificates/3124484.pem
	I0111 08:11:58.913409 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> /usr/share/ca-certificates/31244842.pem
	I0111 08:11:58.913910 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:11:58.934507 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 08:11:58.955410 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:11:58.973948 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:11:58.992574 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:11:59.013013 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:11:59.031246 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:11:59.051982 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:11:59.070932 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:11:59.088240 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem --> /usr/share/ca-certificates/3124484.pem (1338 bytes)
	I0111 08:11:59.106023 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /usr/share/ca-certificates/31244842.pem (1708 bytes)
	I0111 08:11:59.124702 3329885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
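The apiserver serving certificate generated above carries SANs for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2, and has just been copied to /var/lib/minikube/certs/apiserver.crt on the node; the openssl probe on the next line suggests the tool is present there. Checking the SANs by hand would look roughly like this (node name taken from this run):

	docker exec force-systemd-flag-610060 \
	  openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'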
	I0111 08:11:59.137901 3329885 ssh_runner.go:195] Run: openssl version
	I0111 08:11:59.144226 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/31244842.pem
	I0111 08:11:59.152606 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/31244842.pem /etc/ssl/certs/31244842.pem
	I0111 08:11:59.160337 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31244842.pem
	I0111 08:11:59.164263 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:32 /usr/share/ca-certificates/31244842.pem
	I0111 08:11:59.164416 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31244842.pem
	I0111 08:11:59.206887 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:11:59.214713 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/31244842.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:11:59.222480 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:59.230140 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:11:59.238568 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:59.242380 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:26 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:59.242451 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:59.283430 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:11:59.291242 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:11:59.299017 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3124484.pem
	I0111 08:11:59.306462 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3124484.pem /etc/ssl/certs/3124484.pem
	I0111 08:11:59.314159 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3124484.pem
	I0111 08:11:59.318199 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:32 /usr/share/ca-certificates/3124484.pem
	I0111 08:11:59.318267 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3124484.pem
	I0111 08:11:59.364365 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:11:59.372018 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3124484.pem /etc/ssl/certs/51391683.0
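The openssl/ln pairs above are the manual form of c_rehash: each CA under /usr/share/ca-certificates is hashed with 'openssl x509 -hash' and a '<hash>.0' symlink is dropped into /etc/ssl/certs so OpenSSL's lookup-by-subject-hash finds it (b5213941 is exactly the hash computed for minikubeCA.pem above). The same idiom for any PEM CA, as a sketch:

	pem=/usr/share/ca-certificates/minikubeCA.pem       # any CA certificate in PEM form
	hash=$(openssl x509 -hash -noout -in "$pem")        # prints the subject hash, e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"       # .0 = first certificate with this hash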
	I0111 08:11:59.379565 3329885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:11:59.383252 3329885 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:11:59.383305 3329885 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:11:59.383396 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0111 08:11:59.383462 3329885 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:11:59.409520 3329885 cri.go:96] found id: ""
	I0111 08:11:59.409625 3329885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:11:59.417554 3329885 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:11:59.425266 3329885 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:11:59.425333 3329885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:11:59.433014 3329885 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:11:59.433033 3329885 kubeadm.go:158] found existing configuration files:
	
	I0111 08:11:59.433106 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:11:59.441062 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:11:59.441144 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:11:59.448415 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:11:59.456088 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:11:59.456158 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:11:59.463696 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:11:59.471473 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:11:59.471550 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:11:59.479035 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:11:59.486818 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:11:59.486907 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:11:59.494369 3329885 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:11:59.531469 3329885 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:11:59.531535 3329885 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:11:59.616591 3329885 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:11:59.616667 3329885 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:11:59.616707 3329885 kubeadm.go:319] OS: Linux
	I0111 08:11:59.616757 3329885 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:11:59.616809 3329885 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:11:59.616860 3329885 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:11:59.616913 3329885 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:11:59.616966 3329885 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:11:59.617026 3329885 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:11:59.617076 3329885 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:11:59.617128 3329885 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:11:59.617177 3329885 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:11:59.680028 3329885 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:11:59.680143 3329885 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:11:59.680238 3329885 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:11:59.688820 3329885 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:11:59.695891 3329885 out.go:252]   - Generating certificates and keys ...
	I0111 08:11:59.696068 3329885 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:11:59.696180 3329885 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:11:59.888200 3329885 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:12:00.676065 3329885 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:12:00.930267 3329885 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:12:01.030505 3329885 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:12:01.283889 3329885 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:12:01.284218 3329885 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:12:01.834107 3329885 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:12:01.834425 3329885 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:12:01.879677 3329885 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:12:02.051499 3329885 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:12:02.379706 3329885 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:12:02.379938 3329885 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:12:02.595602 3329885 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:12:03.030736 3329885 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:12:03.387448 3329885 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:12:03.538058 3329885 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:12:04.600361 3329885 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:12:04.601328 3329885 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:12:04.604433 3329885 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:12:04.608234 3329885 out.go:252]   - Booting up control plane ...
	I0111 08:12:04.608345 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:12:04.608424 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:12:04.609166 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:12:04.626788 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:12:04.626897 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:12:04.634879 3329885 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:12:04.635665 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:12:04.635945 3329885 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:12:04.773900 3329885 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:12:04.774028 3329885 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:16:04.774066 3329885 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001234218s
	I0111 08:16:04.774104 3329885 kubeadm.go:319] 
	I0111 08:16:04.774255 3329885 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:16:04.774569 3329885 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:16:04.774865 3329885 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:16:04.774874 3329885 kubeadm.go:319] 
	I0111 08:16:04.775174 3329885 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:16:04.775233 3329885 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:16:04.775288 3329885 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:16:04.775294 3329885 kubeadm.go:319] 
	I0111 08:16:04.781053 3329885 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:16:04.781556 3329885 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:16:04.781670 3329885 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:16:04.781954 3329885 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:16:04.781976 3329885 kubeadm.go:319] 
	I0111 08:16:04.782060 3329885 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0111 08:16:04.782196 3329885 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001234218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001234218s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
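Both this attempt and the retry below fail identically: kubeadm waits four minutes for the kubelet health endpoint on 127.0.0.1:10248 and never gets an answer, so no control-plane static pods are ever created. The two hints kubeadm prints translate to roughly the following from the host; docker exec access to the node container and curl being present in the kicbase image are assumptions here:

	docker exec force-systemd-flag-610060 systemctl status kubelet --no-pager
	docker exec force-systemd-flag-610060 journalctl -u kubelet -n 100 --no-pager
	docker exec force-systemd-flag-610060 curl -sS http://127.0.0.1:10248/healthz; echo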
	
	I0111 08:16:04.782323 3329885 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0111 08:16:05.213214 3329885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:16:05.227519 3329885 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:16:05.227584 3329885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:16:05.237077 3329885 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:16:05.237098 3329885 kubeadm.go:158] found existing configuration files:
	
	I0111 08:16:05.237153 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:16:05.245177 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:16:05.245249 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:16:05.253388 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:16:05.262192 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:16:05.262276 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:16:05.270385 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:16:05.278493 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:16:05.278558 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:16:05.286603 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:16:05.294682 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:16:05.294754 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:16:05.302943 3329885 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:16:05.342985 3329885 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:16:05.343160 3329885 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:16:05.415732 3329885 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:16:05.415812 3329885 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:16:05.415853 3329885 kubeadm.go:319] OS: Linux
	I0111 08:16:05.415903 3329885 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:16:05.415955 3329885 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:16:05.416005 3329885 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:16:05.416055 3329885 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:16:05.416107 3329885 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:16:05.416158 3329885 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:16:05.416207 3329885 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:16:05.416260 3329885 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:16:05.416342 3329885 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:16:05.488509 3329885 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:16:05.488621 3329885 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:16:05.488712 3329885 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:16:05.496708 3329885 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:16:05.500018 3329885 out.go:252]   - Generating certificates and keys ...
	I0111 08:16:05.500132 3329885 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:16:05.500212 3329885 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:16:05.500346 3329885 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 08:16:05.500426 3329885 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 08:16:05.500509 3329885 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 08:16:05.500579 3329885 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 08:16:05.500657 3329885 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 08:16:05.500734 3329885 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 08:16:05.500836 3329885 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 08:16:05.500927 3329885 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 08:16:05.500980 3329885 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 08:16:05.501053 3329885 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:16:05.594753 3329885 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:16:05.890561 3329885 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:16:06.331295 3329885 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:16:06.574863 3329885 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:16:06.785086 3329885 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:16:06.785655 3329885 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:16:06.788115 3329885 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:16:06.790955 3329885 out.go:252]   - Booting up control plane ...
	I0111 08:16:06.791057 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:16:06.791134 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:16:06.791201 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:16:06.812845 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:16:06.813197 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:16:06.820488 3329885 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:16:06.820834 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:16:06.820880 3329885 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:16:06.986413 3329885 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:16:06.986538 3329885 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:20:06.987125 3329885 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117732s
	I0111 08:20:06.987156 3329885 kubeadm.go:319] 
	I0111 08:20:06.987538 3329885 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:20:06.987650 3329885 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:20:06.987914 3329885 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:20:06.987923 3329885 kubeadm.go:319] 
	I0111 08:20:06.988383 3329885 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:20:06.988449 3329885 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:20:06.988619 3329885 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:20:06.988627 3329885 kubeadm.go:319] 
	I0111 08:20:06.994037 3329885 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:20:06.994459 3329885 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:20:06.994571 3329885 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:20:06.994810 3329885 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:20:06.994819 3329885 kubeadm.go:319] 
	I0111 08:20:06.994887 3329885 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:20:06.994944 3329885 kubeadm.go:403] duration metric: took 8m7.611643479s to StartCluster
	I0111 08:20:06.994981 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0111 08:20:06.995043 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 08:20:07.022617 3329885 cri.go:96] found id: ""
	I0111 08:20:07.022699 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.022717 3329885 logs.go:284] No container was found matching "kube-apiserver"
	I0111 08:20:07.022724 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0111 08:20:07.022804 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 08:20:07.050589 3329885 cri.go:96] found id: ""
	I0111 08:20:07.050614 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.050623 3329885 logs.go:284] No container was found matching "etcd"
	I0111 08:20:07.050629 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0111 08:20:07.050713 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 08:20:07.076582 3329885 cri.go:96] found id: ""
	I0111 08:20:07.076608 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.076618 3329885 logs.go:284] No container was found matching "coredns"
	I0111 08:20:07.076625 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0111 08:20:07.076719 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 08:20:07.103212 3329885 cri.go:96] found id: ""
	I0111 08:20:07.103238 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.103247 3329885 logs.go:284] No container was found matching "kube-scheduler"
	I0111 08:20:07.103254 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0111 08:20:07.103318 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 08:20:07.129632 3329885 cri.go:96] found id: ""
	I0111 08:20:07.129709 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.129733 3329885 logs.go:284] No container was found matching "kube-proxy"
	I0111 08:20:07.129744 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 08:20:07.129817 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 08:20:07.155361 3329885 cri.go:96] found id: ""
	I0111 08:20:07.155388 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.155397 3329885 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 08:20:07.155404 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0111 08:20:07.155466 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 08:20:07.180698 3329885 cri.go:96] found id: ""
	I0111 08:20:07.180793 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.180810 3329885 logs.go:284] No container was found matching "kindnet"
	I0111 08:20:07.180822 3329885 logs.go:123] Gathering logs for kubelet ...
	I0111 08:20:07.180834 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 08:20:07.237611 3329885 logs.go:123] Gathering logs for dmesg ...
	I0111 08:20:07.237644 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 08:20:07.252588 3329885 logs.go:123] Gathering logs for describe nodes ...
	I0111 08:20:07.252615 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 08:20:07.317178 3329885 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:20:07.309153    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.309712    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.311194    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.311610    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.313064    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 08:20:07.309153    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.309712    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.311194    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.311610    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.313064    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 08:20:07.317197 3329885 logs.go:123] Gathering logs for containerd ...
	I0111 08:20:07.317211 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0111 08:20:07.357000 3329885 logs.go:123] Gathering logs for container status ...
	I0111 08:20:07.357043 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0111 08:20:07.386881 3329885 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 08:20:07.386931 3329885 out.go:285] * 
	* 
	W0111 08:20:07.386981 3329885 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:20:07.386998 3329885 out.go:285] * 
	* 
	W0111 08:20:07.387249 3329885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:20:07.394169 3329885 out.go:203] 
	W0111 08:20:07.397066 3329885 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:20:07.397111 3329885 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 08:20:07.397136 3329885 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 08:20:07.400215 3329885 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-610060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
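The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit path: kubeadm's wait-control-plane phase timed out after 4m because the kubelet never answered on http://127.0.0.1:10248/healthz, and the log's own suggestion is to check the kubelet journal and retry with an explicit systemd cgroup driver. A minimal local follow-up along those lines might look like the sketch below; the profile name and flags are taken from this run, and --extra-config=kubelet.cgroup-driver=systemd is the suggestion quoted in the log, not a verified fix.

	# Retry the same start with the cgroup driver the log suggests (illustrative sketch, not a verified fix):
	out/minikube-linux-arm64 start -p force-systemd-flag-610060 --memory=3072 --force-systemd \
	  --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Inspect the kubelet inside the node container, per the kubeadm troubleshooting hint in the log:
	out/minikube-linux-arm64 -p force-systemd-flag-610060 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p force-systemd-flag-610060 ssh -- sudo journalctl -xeu kubelet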
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-610060 ssh "cat /etc/containerd/config.toml"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-11 08:20:07.804164748 +0000 UTC m=+3265.838463320
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-610060
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-610060:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332",
	        "Created": "2026-01-11T08:11:50.813449934Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3330323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:11:50.887790565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332/hostname",
	        "HostsPath": "/var/lib/docker/containers/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332/hosts",
	        "LogPath": "/var/lib/docker/containers/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332/13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332-json.log",
	        "Name": "/force-systemd-flag-610060",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-610060:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-610060",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13258c8511dbba990f568c01b6080fc04fe8fac10db23212692b172424c9f332",
	                "LowerDir": "/var/lib/docker/overlay2/3303dc1dfcec74d7fa27dc3d78662cfd4ed7429ac134fa3faec78dbcda10adbb-init/diff:/var/lib/docker/overlay2/df463cec8bfb6e167fe65d2de959d2835d839df5d29dad0284e7abf6afbac443/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3303dc1dfcec74d7fa27dc3d78662cfd4ed7429ac134fa3faec78dbcda10adbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3303dc1dfcec74d7fa27dc3d78662cfd4ed7429ac134fa3faec78dbcda10adbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3303dc1dfcec74d7fa27dc3d78662cfd4ed7429ac134fa3faec78dbcda10adbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-610060",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-610060/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-610060",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-610060",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-610060",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e09bffb69bfeaa2e9e1334ad39b7ef5deb66204a099396be0fedeac63070bd3b",
	            "SandboxKey": "/var/run/docker/netns/e09bffb69bfe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-610060": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:95:b0:9d:ee:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a1d8b67bcadc5f12a2e757111f8d5de32531915336d5c492f2148c9847055be3",
	                    "EndpointID": "2f08efa16d250bd664c6cebb474c0a73513564f8eebe1a31d64c958ff5d39f91",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-610060",
	                        "13258c8511db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
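For a force-systemd test, the parts of the inspect dump above that matter most are the node container's cgroup namespace mode, privileged flag, and security options. When reproducing locally, those fields can be pulled directly with a Go template instead of reading the full JSON; this is a convenience sketch using the container name from this run.

	# Pull only the cgroup/privilege-related HostConfig fields from the node container (sketch):
	docker inspect -f '{{.HostConfig.CgroupnsMode}} {{.HostConfig.Privileged}} {{json .HostConfig.SecurityOpt}}' force-systemd-flag-610060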
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-610060 -n force-systemd-flag-610060
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-610060 -n force-systemd-flag-610060: exit status 6 (311.010865ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:20:08.117839 3358915 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-610060" does not appear in /home/jenkins/minikube-integration/22402-3122619/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
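The exit status 6 corresponds to the kubeconfig error in stderr: force-systemd-flag-610060 has no endpoint entry in the run's kubeconfig, presumably because the start above never completed far enough to write one. A quick local check of what that kubeconfig actually contains might look like this; the path is the one reported in this run.

	# List the contexts and cluster names present in the run's kubeconfig (sketch):
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22402-3122619/kubeconfig
	kubectl config view --kubeconfig /home/jenkins/minikube-integration/22402-3122619/kubeconfig -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'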
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-610060 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ delete  │ -p cert-options-554375                                                                                                                                                                                                                              │ cert-options-554375       │ jenkins │ v1.37.0 │ 11 Jan 26 08:14 UTC │ 11 Jan 26 08:14 UTC │
	│ start   │ -p old-k8s-version-334404 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:14 UTC │ 11 Jan 26 08:15 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-334404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:15 UTC │ 11 Jan 26 08:15 UTC │
	│ stop    │ -p old-k8s-version-334404 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:15 UTC │ 11 Jan 26 08:15 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-334404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:15 UTC │ 11 Jan 26 08:15 UTC │
	│ start   │ -p old-k8s-version-334404 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:15 UTC │ 11 Jan 26 08:16 UTC │
	│ image   │ old-k8s-version-334404 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
	│ pause   │ -p old-k8s-version-334404 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
	│ unpause │ -p old-k8s-version-334404 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
	│ delete  │ -p old-k8s-version-334404                                                                                                                                                                                                                           │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
	│ delete  │ -p old-k8s-version-334404                                                                                                                                                                                                                           │ old-k8s-version-334404    │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:16 UTC │
	│ start   │ -p no-preload-563183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:16 UTC │ 11 Jan 26 08:17 UTC │
	│ addons  │ enable metrics-server -p no-preload-563183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │ 11 Jan 26 08:17 UTC │
	│ stop    │ -p no-preload-563183 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │ 11 Jan 26 08:17 UTC │
	│ addons  │ enable dashboard -p no-preload-563183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │ 11 Jan 26 08:17 UTC │
	│ start   │ -p no-preload-563183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                       │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:17 UTC │ 11 Jan 26 08:18 UTC │
	│ image   │ no-preload-563183 image list --format=json                                                                                                                                                                                                          │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:18 UTC │ 11 Jan 26 08:18 UTC │
	│ pause   │ -p no-preload-563183 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:18 UTC │ 11 Jan 26 08:18 UTC │
	│ unpause │ -p no-preload-563183 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:18 UTC │ 11 Jan 26 08:18 UTC │
	│ delete  │ -p no-preload-563183                                                                                                                                                                                                                                │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:18 UTC │ 11 Jan 26 08:19 UTC │
	│ delete  │ -p no-preload-563183                                                                                                                                                                                                                                │ no-preload-563183         │ jenkins │ v1.37.0 │ 11 Jan 26 08:19 UTC │ 11 Jan 26 08:19 UTC │
	│ start   │ -p embed-certs-239792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0                                                                                        │ embed-certs-239792        │ jenkins │ v1.37.0 │ 11 Jan 26 08:19 UTC │ 11 Jan 26 08:19 UTC │
	│ addons  │ enable metrics-server -p embed-certs-239792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-239792        │ jenkins │ v1.37.0 │ 11 Jan 26 08:19 UTC │ 11 Jan 26 08:19 UTC │
	│ stop    │ -p embed-certs-239792 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-239792        │ jenkins │ v1.37.0 │ 11 Jan 26 08:19 UTC │                     │
	│ ssh     │ force-systemd-flag-610060 ssh cat /etc/containerd/config.toml                                                                                                                                                                                       │ force-systemd-flag-610060 │ jenkins │ v1.37.0 │ 11 Jan 26 08:20 UTC │ 11 Jan 26 08:20 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:19:02
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:19:02.476072 3354790 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:19:02.476232 3354790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:19:02.476242 3354790 out.go:374] Setting ErrFile to fd 2...
	I0111 08:19:02.476248 3354790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:19:02.476560 3354790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 08:19:02.477023 3354790 out.go:368] Setting JSON to false
	I0111 08:19:02.477864 3354790 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":50494,"bootTime":1768069049,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 08:19:02.477934 3354790 start.go:143] virtualization:  
	I0111 08:19:02.484119 3354790 out.go:179] * [embed-certs-239792] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:19:02.487599 3354790 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:19:02.487716 3354790 notify.go:221] Checking for updates...
	I0111 08:19:02.494008 3354790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:19:02.497155 3354790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 08:19:02.500131 3354790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 08:19:02.503189 3354790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:19:02.506234 3354790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:19:02.509650 3354790 config.go:182] Loaded profile config "force-systemd-flag-610060": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:19:02.509774 3354790 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:19:02.546473 3354790 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:19:02.546595 3354790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:19:02.629622 3354790 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:19:02.620293113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:19:02.629728 3354790 docker.go:319] overlay module found
	I0111 08:19:02.634970 3354790 out.go:179] * Using the docker driver based on user configuration
	I0111 08:19:02.637937 3354790 start.go:309] selected driver: docker
	I0111 08:19:02.637962 3354790 start.go:928] validating driver "docker" against <nil>
	I0111 08:19:02.637976 3354790 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:19:02.638745 3354790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:19:02.699375 3354790 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:19:02.689814575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:19:02.699537 3354790 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:19:02.699772 3354790 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 08:19:02.702839 3354790 out.go:179] * Using Docker driver with root privileges
	I0111 08:19:02.705866 3354790 cni.go:84] Creating CNI manager for ""
	I0111 08:19:02.705941 3354790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:19:02.705955 3354790 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:19:02.706039 3354790 start.go:353] cluster config:
	{Name:embed-certs-239792 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:19:02.709108 3354790 out.go:179] * Starting "embed-certs-239792" primary control-plane node in "embed-certs-239792" cluster
	I0111 08:19:02.711876 3354790 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0111 08:19:02.714741 3354790 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:19:02.717565 3354790 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:19:02.717566 3354790 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:19:02.717637 3354790 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I0111 08:19:02.717646 3354790 cache.go:65] Caching tarball of preloaded images
	I0111 08:19:02.717727 3354790 preload.go:251] Found /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0111 08:19:02.717737 3354790 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I0111 08:19:02.717847 3354790 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/config.json ...
	I0111 08:19:02.717869 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/config.json: {Name:mk6ff7aa76924208f5adafe031a39c23e80e0d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:02.736012 3354790 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:19:02.736037 3354790 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:19:02.736061 3354790 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:19:02.736093 3354790 start.go:360] acquireMachinesLock for embed-certs-239792: {Name:mk5b08453b2b6902642bd60cad5e87b3738323be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:19:02.736214 3354790 start.go:364] duration metric: took 97.705µs to acquireMachinesLock for "embed-certs-239792"
	I0111 08:19:02.736246 3354790 start.go:93] Provisioning new machine with config: &{Name:embed-certs-239792 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0111 08:19:02.736348 3354790 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:19:02.739776 3354790 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:19:02.740013 3354790 start.go:159] libmachine.API.Create for "embed-certs-239792" (driver="docker")
	I0111 08:19:02.740049 3354790 client.go:173] LocalClient.Create starting
	I0111 08:19:02.740123 3354790 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem
	I0111 08:19:02.740162 3354790 main.go:144] libmachine: Decoding PEM data...
	I0111 08:19:02.740181 3354790 main.go:144] libmachine: Parsing certificate...
	I0111 08:19:02.740237 3354790 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem
	I0111 08:19:02.740266 3354790 main.go:144] libmachine: Decoding PEM data...
	I0111 08:19:02.740277 3354790 main.go:144] libmachine: Parsing certificate...
	I0111 08:19:02.740767 3354790 cli_runner.go:164] Run: docker network inspect embed-certs-239792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:19:02.757322 3354790 cli_runner.go:211] docker network inspect embed-certs-239792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:19:02.757411 3354790 network_create.go:284] running [docker network inspect embed-certs-239792] to gather additional debugging logs...
	I0111 08:19:02.757436 3354790 cli_runner.go:164] Run: docker network inspect embed-certs-239792
	W0111 08:19:02.772184 3354790 cli_runner.go:211] docker network inspect embed-certs-239792 returned with exit code 1
	I0111 08:19:02.772222 3354790 network_create.go:287] error running [docker network inspect embed-certs-239792]: docker network inspect embed-certs-239792: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-239792 not found
	I0111 08:19:02.772236 3354790 network_create.go:289] output of [docker network inspect embed-certs-239792]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-239792 not found
	
	** /stderr **
	I0111 08:19:02.772460 3354790 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:19:02.789854 3354790 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d6a2604bb10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:cd:63:f9:b2:f8} reservation:<nil>}
	I0111 08:19:02.790351 3354790 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cec031213447 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:71:bf:56:ac:cb} reservation:<nil>}
	I0111 08:19:02.790630 3354790 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0e2d137ca1da IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:68:81:9e:35:63} reservation:<nil>}
	I0111 08:19:02.791263 3354790 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cd120}
	I0111 08:19:02.791296 3354790 network_create.go:124] attempt to create docker network embed-certs-239792 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0111 08:19:02.791411 3354790 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-239792 embed-certs-239792
	I0111 08:19:02.855591 3354790 network_create.go:108] docker network embed-certs-239792 192.168.76.0/24 created
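(Editor's sketch, not part of the log: assuming the embed-certs-239792 network still exists on the host, its subnet and gateway can be read back with the same docker CLI fields minikube templates above.)

	# hedged example; network name and fields taken from the log lines above
	docker network inspect embed-certs-239792 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected, per the log: 192.168.76.0/24 192.168.76.1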
	I0111 08:19:02.855627 3354790 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-239792" container
	I0111 08:19:02.855706 3354790 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:19:02.872321 3354790 cli_runner.go:164] Run: docker volume create embed-certs-239792 --label name.minikube.sigs.k8s.io=embed-certs-239792 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:19:02.889956 3354790 oci.go:103] Successfully created a docker volume embed-certs-239792
	I0111 08:19:02.890047 3354790 cli_runner.go:164] Run: docker run --rm --name embed-certs-239792-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-239792 --entrypoint /usr/bin/test -v embed-certs-239792:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:19:03.403926 3354790 oci.go:107] Successfully prepared a docker volume embed-certs-239792
	I0111 08:19:03.403996 3354790 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:19:03.404008 3354790 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:19:03.404081 3354790 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-239792:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:19:07.260156 3354790 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-239792:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.856039779s)
	I0111 08:19:07.260187 3354790 kic.go:203] duration metric: took 3.856175735s to extract preloaded images to volume ...
	W0111 08:19:07.260369 3354790 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:19:07.260487 3354790 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:19:07.320710 3354790 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-239792 --name embed-certs-239792 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-239792 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-239792 --network embed-certs-239792 --ip 192.168.76.2 --volume embed-certs-239792:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:19:07.656032 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Running}}
	I0111 08:19:07.680144 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
	I0111 08:19:07.703298 3354790 cli_runner.go:164] Run: docker exec embed-certs-239792 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:19:07.754381 3354790 oci.go:144] the created container "embed-certs-239792" has a running status.
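(Editor's sketch, assuming the container is still running: the container state and the forwarded SSH port that the later dials target can be read back with the same inspect templates the log runs.)

	# hedged example; container name and format templates copied from the log
	docker container inspect embed-certs-239792 --format '{{.State.Status}}'
	docker container inspect embed-certs-239792 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'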
	I0111 08:19:07.754409 3354790 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa...
	I0111 08:19:07.906741 3354790 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:19:07.930180 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
	I0111 08:19:07.952075 3354790 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:19:07.952093 3354790 kic_runner.go:114] Args: [docker exec --privileged embed-certs-239792 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:19:08.000055 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
	I0111 08:19:08.023510 3354790 machine.go:94] provisionDockerMachine start ...
	I0111 08:19:08.023600 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:08.054227 3354790 main.go:144] libmachine: Using SSH client type: native
	I0111 08:19:08.054566 3354790 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35843 <nil> <nil>}
	I0111 08:19:08.054581 3354790 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:19:08.055154 3354790 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39750->127.0.0.1:35843: read: connection reset by peer
	I0111 08:19:11.208272 3354790 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-239792
	
	I0111 08:19:11.208368 3354790 ubuntu.go:182] provisioning hostname "embed-certs-239792"
	I0111 08:19:11.208464 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:11.226933 3354790 main.go:144] libmachine: Using SSH client type: native
	I0111 08:19:11.227248 3354790 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35843 <nil> <nil>}
	I0111 08:19:11.227265 3354790 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-239792 && echo "embed-certs-239792" | sudo tee /etc/hostname
	I0111 08:19:11.385593 3354790 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-239792
	
	I0111 08:19:11.385776 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:11.403545 3354790 main.go:144] libmachine: Using SSH client type: native
	I0111 08:19:11.403849 3354790 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35843 <nil> <nil>}
	I0111 08:19:11.403865 3354790 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-239792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-239792/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-239792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:19:11.552833 3354790 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:19:11.552864 3354790 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-3122619/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-3122619/.minikube}
	I0111 08:19:11.552886 3354790 ubuntu.go:190] setting up certificates
	I0111 08:19:11.552895 3354790 provision.go:84] configureAuth start
	I0111 08:19:11.552968 3354790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-239792
	I0111 08:19:11.570815 3354790 provision.go:143] copyHostCerts
	I0111 08:19:11.570878 3354790 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem, removing ...
	I0111 08:19:11.570886 3354790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
	I0111 08:19:11.570964 3354790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem (1078 bytes)
	I0111 08:19:11.571059 3354790 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem, removing ...
	I0111 08:19:11.571065 3354790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
	I0111 08:19:11.571089 3354790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem (1123 bytes)
	I0111 08:19:11.571142 3354790 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem, removing ...
	I0111 08:19:11.571147 3354790 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
	I0111 08:19:11.571168 3354790 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem (1675 bytes)
	I0111 08:19:11.571211 3354790 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem org=jenkins.embed-certs-239792 san=[127.0.0.1 192.168.76.2 embed-certs-239792 localhost minikube]
	I0111 08:19:11.697056 3354790 provision.go:177] copyRemoteCerts
	I0111 08:19:11.697128 3354790 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:19:11.697172 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:11.715383 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
	I0111 08:19:11.820779 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:19:11.841550 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0111 08:19:11.862065 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
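(Editor's sketch: the SANs requested at 08:19:11.571211 above can be confirmed on the host against the generated server certificate, assuming a standard openssl is available; the path is taken from the scp line above.)

	# hedged example
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'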
	I0111 08:19:11.879110 3354790 provision.go:87] duration metric: took 326.189464ms to configureAuth
	I0111 08:19:11.879191 3354790 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:19:11.879406 3354790 config.go:182] Loaded profile config "embed-certs-239792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:19:11.879423 3354790 machine.go:97] duration metric: took 3.855894305s to provisionDockerMachine
	I0111 08:19:11.879431 3354790 client.go:176] duration metric: took 9.139371359s to LocalClient.Create
	I0111 08:19:11.879450 3354790 start.go:167] duration metric: took 9.139438393s to libmachine.API.Create "embed-certs-239792"
	I0111 08:19:11.879457 3354790 start.go:293] postStartSetup for "embed-certs-239792" (driver="docker")
	I0111 08:19:11.879472 3354790 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:19:11.879532 3354790 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:19:11.879577 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:11.896718 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
	I0111 08:19:12.008666 3354790 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:19:12.012948 3354790 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:19:12.012979 3354790 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:19:12.012992 3354790 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/addons for local assets ...
	I0111 08:19:12.013102 3354790 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/files for local assets ...
	I0111 08:19:12.013217 3354790 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> 31244842.pem in /etc/ssl/certs
	I0111 08:19:12.013335 3354790 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:19:12.021915 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /etc/ssl/certs/31244842.pem (1708 bytes)
	I0111 08:19:12.040856 3354790 start.go:296] duration metric: took 161.384464ms for postStartSetup
	I0111 08:19:12.041244 3354790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-239792
	I0111 08:19:12.058351 3354790 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/config.json ...
	I0111 08:19:12.058661 3354790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:19:12.058712 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:12.075416 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
	I0111 08:19:12.177555 3354790 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:19:12.182584 3354790 start.go:128] duration metric: took 9.446219644s to createHost
	I0111 08:19:12.182611 3354790 start.go:83] releasing machines lock for "embed-certs-239792", held for 9.446382725s
	I0111 08:19:12.182706 3354790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-239792
	I0111 08:19:12.199196 3354790 ssh_runner.go:195] Run: cat /version.json
	I0111 08:19:12.199245 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:12.199265 3354790 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:19:12.199322 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:12.217368 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
	I0111 08:19:12.228021 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
	I0111 08:19:12.420699 3354790 ssh_runner.go:195] Run: systemctl --version
	I0111 08:19:12.427351 3354790 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:19:12.431803 3354790 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:19:12.431952 3354790 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:19:12.460179 3354790 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:19:12.460252 3354790 start.go:496] detecting cgroup driver to use...
	I0111 08:19:12.460335 3354790 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 08:19:12.460403 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0111 08:19:12.475336 3354790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0111 08:19:12.488537 3354790 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:19:12.488633 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:19:12.506770 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:19:12.525173 3354790 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:19:12.671379 3354790 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:19:12.797777 3354790 docker.go:234] disabling docker service ...
	I0111 08:19:12.797849 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:19:12.821196 3354790 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:19:12.834644 3354790 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:19:12.958805 3354790 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:19:13.083683 3354790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:19:13.097191 3354790 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:19:13.112629 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0111 08:19:13.122171 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0111 08:19:13.131227 3354790 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I0111 08:19:13.131338 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0111 08:19:13.140438 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:19:13.149884 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0111 08:19:13.158791 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:19:13.167913 3354790 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:19:13.176867 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0111 08:19:13.186081 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0111 08:19:13.194914 3354790 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0111 08:19:13.204100 3354790 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:19:13.212221 3354790 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:19:13.219910 3354790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:19:13.357285 3354790 ssh_runner.go:195] Run: sudo systemctl restart containerd
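(Editor's sketch: this run configures containerd for the "cgroupfs" driver, so the effect of the SystemdCgroup sed above can be spot-checked from the host, assuming the node container is still up; container name and path are taken from the log.)

	# hedged example
	docker exec embed-certs-239792 grep -n 'SystemdCgroup' /etc/containerd/config.toml
	docker exec embed-certs-239792 systemctl is-active containerd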
	I0111 08:19:13.486148 3354790 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I0111 08:19:13.486218 3354790 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0111 08:19:13.490524 3354790 start.go:574] Will wait 60s for crictl version
	I0111 08:19:13.490602 3354790 ssh_runner.go:195] Run: which crictl
	I0111 08:19:13.494183 3354790 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:19:13.518293 3354790 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I0111 08:19:13.518362 3354790 ssh_runner.go:195] Run: containerd --version
	I0111 08:19:13.540885 3354790 ssh_runner.go:195] Run: containerd --version
	I0111 08:19:13.565153 3354790 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I0111 08:19:13.568318 3354790 cli_runner.go:164] Run: docker network inspect embed-certs-239792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:19:13.584596 3354790 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 08:19:13.588533 3354790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
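(Editor's sketch, assuming the node container is reachable: the host.minikube.internal entry written here can be confirmed with a hosts-file lookup inside the node.)

	# hedged example
	docker exec embed-certs-239792 getent hosts host.minikube.internal
	# expected, per the log: 192.168.76.1  host.minikube.internal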
	I0111 08:19:13.598614 3354790 kubeadm.go:884] updating cluster {Name:embed-certs-239792 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:19:13.598733 3354790 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:19:13.598807 3354790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:19:13.625049 3354790 containerd.go:635] all images are preloaded for containerd runtime.
	I0111 08:19:13.625074 3354790 containerd.go:542] Images already preloaded, skipping extraction
	I0111 08:19:13.625136 3354790 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:19:13.650215 3354790 containerd.go:635] all images are preloaded for containerd runtime.
	I0111 08:19:13.650241 3354790 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:19:13.650249 3354790 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I0111 08:19:13.650347 3354790 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-239792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:19:13.650414 3354790 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I0111 08:19:13.680955 3354790 cni.go:84] Creating CNI manager for ""
	I0111 08:19:13.680987 3354790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:19:13.681010 3354790 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:19:13.681034 3354790 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-239792 NodeName:embed-certs-239792 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:19:13.681152 3354790 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-239792"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:19:13.681226 3354790 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:19:13.689351 3354790 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:19:13.689477 3354790 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:19:13.697540 3354790 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0111 08:19:13.710955 3354790 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:19:13.724046 3354790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2251 bytes)
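(Editor's sketch: the three files written above can be inspected on the node, and the generated kubeadm config can be exercised without side effects via kubeadm's dry-run mode; the paths and the kubeadm binary location come from the log, everything else is an assumption.)

	# hedged example; run inside the node, e.g. via `docker exec -it embed-certs-239792 bash`
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	cat /lib/systemd/system/kubelet.service
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run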
	I0111 08:19:13.737418 3354790 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:19:13.741282 3354790 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:19:13.750773 3354790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:19:13.867134 3354790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:19:13.884767 3354790 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792 for IP: 192.168.76.2
	I0111 08:19:13.884790 3354790 certs.go:195] generating shared ca certs ...
	I0111 08:19:13.884807 3354790 certs.go:227] acquiring lock for ca certs: {Name:mk4f88e5992499f3a8089baf463e3ba7f81a52c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:13.884965 3354790 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key
	I0111 08:19:13.885013 3354790 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key
	I0111 08:19:13.885024 3354790 certs.go:257] generating profile certs ...
	I0111 08:19:13.885081 3354790 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.key
	I0111 08:19:13.885106 3354790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.crt with IP's: []
	I0111 08:19:13.943644 3354790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.crt ...
	I0111 08:19:13.943680 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.crt: {Name:mk0842e9d75ef0cac3d0190ac4ce2d004aad0c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:13.943905 3354790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.key ...
	I0111 08:19:13.943924 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/client.key: {Name:mk13bf00d0593eaef359005936d66f6485336a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:13.944042 3354790 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key.15041368
	I0111 08:19:13.944062 3354790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt.15041368 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0111 08:19:14.167158 3354790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt.15041368 ...
	I0111 08:19:14.167189 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt.15041368: {Name:mkcdfd32e371e00491dd784b356a0a4a3153fe58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:14.167380 3354790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key.15041368 ...
	I0111 08:19:14.167397 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key.15041368: {Name:mk146db4887bf4ca2b0df30bc734540a70e203e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:14.167495 3354790 certs.go:382] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt.15041368 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt
	I0111 08:19:14.167609 3354790 certs.go:386] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key.15041368 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key
	I0111 08:19:14.167674 3354790 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.key
	I0111 08:19:14.167695 3354790 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.crt with IP's: []
	I0111 08:19:14.559448 3354790 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.crt ...
	I0111 08:19:14.559481 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.crt: {Name:mk2f94ec1f047c5b25028b0889400ee386ffd990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:14.559666 3354790 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.key ...
	I0111 08:19:14.559681 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.key: {Name:mkd47bf0b7a40371f79943d77ef7b1cce27993f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:14.559875 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem (1338 bytes)
	W0111 08:19:14.559919 3354790 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484_empty.pem, impossibly tiny 0 bytes
	I0111 08:19:14.559932 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 08:19:14.559958 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:19:14.559988 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:19:14.560017 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem (1675 bytes)
	I0111 08:19:14.560067 3354790 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem (1708 bytes)
	I0111 08:19:14.560648 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:19:14.580052 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 08:19:14.598952 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:19:14.617013 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:19:14.635698 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0111 08:19:14.653629 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:19:14.674994 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:19:14.693415 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/embed-certs-239792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 08:19:14.711755 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /usr/share/ca-certificates/31244842.pem (1708 bytes)
	I0111 08:19:14.730229 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:19:14.748738 3354790 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem --> /usr/share/ca-certificates/3124484.pem (1338 bytes)
	I0111 08:19:14.766801 3354790 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:19:14.780407 3354790 ssh_runner.go:195] Run: openssl version
	I0111 08:19:14.786715 3354790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/31244842.pem
	I0111 08:19:14.811910 3354790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/31244842.pem /etc/ssl/certs/31244842.pem
	I0111 08:19:14.832709 3354790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31244842.pem
	I0111 08:19:14.839038 3354790 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:32 /usr/share/ca-certificates/31244842.pem
	I0111 08:19:14.839107 3354790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31244842.pem
	I0111 08:19:14.901727 3354790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:19:14.909372 3354790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/31244842.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:19:14.917084 3354790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:19:14.924955 3354790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:19:14.932648 3354790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:19:14.936491 3354790 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:26 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:19:14.936605 3354790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:19:14.978152 3354790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:19:14.986215 3354790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:19:14.994088 3354790 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3124484.pem
	I0111 08:19:15.007465 3354790 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3124484.pem /etc/ssl/certs/3124484.pem
	I0111 08:19:15.020971 3354790 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3124484.pem
	I0111 08:19:15.025889 3354790 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:32 /usr/share/ca-certificates/3124484.pem
	I0111 08:19:15.025991 3354790 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3124484.pem
	I0111 08:19:15.070662 3354790 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:19:15.078790 3354790 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3124484.pem /etc/ssl/certs/51391683.0
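
The certificate-trust steps above follow the standard OpenSSL hashed-symlink convention: each CA file is placed under /usr/share/ca-certificates and a symlink named after its subject hash is created in /etc/ssl/certs. A minimal sketch of that pattern, run by hand with an illustrative path (not a file from this run):

    cert=/usr/share/ca-certificates/example.pem    # illustrative CA file
    hash=$(openssl x509 -hash -noout -in "$cert")  # prints the subject hash, e.g. "3ec20f2e"
    # Link the cert under <hash>.0 so OpenSSL's default trust lookup can find it.
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
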
	I0111 08:19:15.087251 3354790 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:19:15.091300 3354790 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:19:15.091360 3354790 kubeadm.go:401] StartCluster: {Name:embed-certs-239792 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-239792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:19:15.091448 3354790 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0111 08:19:15.091523 3354790 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:19:15.131620 3354790 cri.go:96] found id: ""
	I0111 08:19:15.131712 3354790 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:19:15.139974 3354790 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:19:15.148194 3354790 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:19:15.148315 3354790 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:19:15.156577 3354790 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:19:15.156602 3354790 kubeadm.go:158] found existing configuration files:
	
	I0111 08:19:15.156723 3354790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:19:15.165010 3354790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:19:15.165089 3354790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:19:15.172999 3354790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:19:15.181101 3354790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:19:15.181203 3354790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:19:15.188973 3354790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:19:15.196881 3354790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:19:15.196952 3354790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:19:15.204688 3354790 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:19:15.212543 3354790 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:19:15.212661 3354790 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
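
The four grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A minimal sketch of that pattern as a single loop (file list taken from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already targets the expected endpoint; otherwise drop it.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
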
	I0111 08:19:15.220182 3354790 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:19:15.259052 3354790 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:19:15.259180 3354790 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:19:15.329844 3354790 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:19:15.329983 3354790 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:19:15.330045 3354790 kubeadm.go:319] OS: Linux
	I0111 08:19:15.330117 3354790 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:19:15.330181 3354790 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:19:15.330260 3354790 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:19:15.330334 3354790 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:19:15.330424 3354790 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:19:15.330495 3354790 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:19:15.330569 3354790 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:19:15.330637 3354790 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:19:15.330714 3354790 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:19:15.398924 3354790 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:19:15.399107 3354790 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:19:15.399236 3354790 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:19:15.408650 3354790 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:19:15.415105 3354790 out.go:252]   - Generating certificates and keys ...
	I0111 08:19:15.415212 3354790 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:19:15.415286 3354790 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:19:15.681486 3354790 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:19:15.895857 3354790 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:19:16.383963 3354790 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:19:16.487976 3354790 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:19:16.714930 3354790 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:19:16.715311 3354790 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-239792 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:19:16.797140 3354790 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:19:16.797507 3354790 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-239792 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:19:17.442442 3354790 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:19:17.923345 3354790 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:19:18.300542 3354790 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:19:18.300826 3354790 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:19:18.587740 3354790 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:19:18.728555 3354790 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:19:19.148556 3354790 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:19:19.770672 3354790 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:19:19.915779 3354790 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:19:19.916501 3354790 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:19:19.919187 3354790 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:19:19.922908 3354790 out.go:252]   - Booting up control plane ...
	I0111 08:19:19.923024 3354790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:19:19.923115 3354790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:19:19.923182 3354790 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:19:19.939571 3354790 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:19:19.939937 3354790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:19:19.947697 3354790 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:19:19.948188 3354790 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:19:19.948236 3354790 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:19:20.099965 3354790 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:19:20.100087 3354790 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:19:20.603577 3354790 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 503.790259ms
	I0111 08:19:20.607424 3354790 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 08:19:20.607528 3354790 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0111 08:19:20.607624 3354790 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 08:19:20.608165 3354790 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 08:19:23.617247 3354790 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.008856075s
	I0111 08:19:24.679203 3354790 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.070512111s
	I0111 08:19:26.610243 3354790 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002387831s
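
The three control-plane-check probes above hit each component's local health endpoint directly. Equivalent manual checks, run on the node with the endpoints taken from the log (TLS verification skipped since the serving certs are cluster-internal):

    curl -sk https://192.168.76.2:8443/livez     # kube-apiserver (node IP)
    curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez       # kube-scheduler
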
	I0111 08:19:26.649095 3354790 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 08:19:26.678273 3354790 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 08:19:26.696184 3354790 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 08:19:26.696463 3354790 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-239792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 08:19:26.710449 3354790 kubeadm.go:319] [bootstrap-token] Using token: y49yks.7byb0evlqnwu15qk
	I0111 08:19:26.713334 3354790 out.go:252]   - Configuring RBAC rules ...
	I0111 08:19:26.713461 3354790 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 08:19:26.724335 3354790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 08:19:26.734879 3354790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 08:19:26.741639 3354790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 08:19:26.748473 3354790 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 08:19:26.753106 3354790 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 08:19:27.017622 3354790 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 08:19:27.447726 3354790 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 08:19:28.021172 3354790 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 08:19:28.022489 3354790 kubeadm.go:319] 
	I0111 08:19:28.022572 3354790 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 08:19:28.022583 3354790 kubeadm.go:319] 
	I0111 08:19:28.022662 3354790 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 08:19:28.022672 3354790 kubeadm.go:319] 
	I0111 08:19:28.022697 3354790 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 08:19:28.022760 3354790 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 08:19:28.022814 3354790 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 08:19:28.022822 3354790 kubeadm.go:319] 
	I0111 08:19:28.022877 3354790 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 08:19:28.022885 3354790 kubeadm.go:319] 
	I0111 08:19:28.022933 3354790 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 08:19:28.022941 3354790 kubeadm.go:319] 
	I0111 08:19:28.022993 3354790 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 08:19:28.023072 3354790 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 08:19:28.023144 3354790 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 08:19:28.023152 3354790 kubeadm.go:319] 
	I0111 08:19:28.023236 3354790 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 08:19:28.023317 3354790 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 08:19:28.023324 3354790 kubeadm.go:319] 
	I0111 08:19:28.023415 3354790 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token y49yks.7byb0evlqnwu15qk \
	I0111 08:19:28.023523 3354790 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7fbdaf1f31f22210647da770a1c9ea2e312ca3de8444edfd85d94f45129ca0e7 \
	I0111 08:19:28.023547 3354790 kubeadm.go:319] 	--control-plane 
	I0111 08:19:28.023555 3354790 kubeadm.go:319] 
	I0111 08:19:28.023639 3354790 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 08:19:28.023647 3354790 kubeadm.go:319] 
	I0111 08:19:28.023729 3354790 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y49yks.7byb0evlqnwu15qk \
	I0111 08:19:28.023834 3354790 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7fbdaf1f31f22210647da770a1c9ea2e312ca3de8444edfd85d94f45129ca0e7 
	I0111 08:19:28.027094 3354790 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:19:28.027512 3354790 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:19:28.027623 3354790 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:19:28.027640 3354790 cni.go:84] Creating CNI manager for ""
	I0111 08:19:28.027654 3354790 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:19:28.031273 3354790 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0111 08:19:28.034237 3354790 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0111 08:19:28.039981 3354790 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0111 08:19:28.040004 3354790 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0111 08:19:28.074803 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0111 08:19:28.414251 3354790 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 08:19:28.414386 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:28.414488 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-239792 minikube.k8s.io/updated_at=2026_01_11T08_19_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=embed-certs-239792 minikube.k8s.io/primary=true
	I0111 08:19:28.584973 3354790 ops.go:34] apiserver oom_adj: -16
	I0111 08:19:28.585084 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:29.085402 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:29.585215 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:30.085288 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:30.585504 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:31.085272 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:31.585902 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:32.085606 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:32.585666 3354790 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:19:32.686303 3354790 kubeadm.go:1114] duration metric: took 4.271967252s to wait for elevateKubeSystemPrivileges
	I0111 08:19:32.686337 3354790 kubeadm.go:403] duration metric: took 17.594983028s to StartCluster
	I0111 08:19:32.686354 3354790 settings.go:142] acquiring lock: {Name:mk941d920a0aafe770355773bf43dee753cabb3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:32.686419 3354790 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 08:19:32.687507 3354790 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/kubeconfig: {Name:mk89d287b8f00e4766af7713066504256c0503e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:19:32.687744 3354790 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0111 08:19:32.687864 3354790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 08:19:32.688112 3354790 config.go:182] Loaded profile config "embed-certs-239792": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:19:32.688154 3354790 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0111 08:19:32.688219 3354790 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-239792"
	I0111 08:19:32.688234 3354790 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-239792"
	I0111 08:19:32.688260 3354790 host.go:66] Checking if "embed-certs-239792" exists ...
	I0111 08:19:32.688494 3354790 addons.go:70] Setting default-storageclass=true in profile "embed-certs-239792"
	I0111 08:19:32.688513 3354790 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-239792"
	I0111 08:19:32.689008 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
	I0111 08:19:32.689127 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
	I0111 08:19:32.691678 3354790 out.go:179] * Verifying Kubernetes components...
	I0111 08:19:32.700525 3354790 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:19:32.724312 3354790 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0111 08:19:32.727620 3354790 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 08:19:32.727644 3354790 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0111 08:19:32.727718 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:32.733858 3354790 addons.go:239] Setting addon default-storageclass=true in "embed-certs-239792"
	I0111 08:19:32.733904 3354790 host.go:66] Checking if "embed-certs-239792" exists ...
	I0111 08:19:32.734335 3354790 cli_runner.go:164] Run: docker container inspect embed-certs-239792 --format={{.State.Status}}
	I0111 08:19:32.764430 3354790 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0111 08:19:32.764450 3354790 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0111 08:19:32.764513 3354790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-239792
	I0111 08:19:32.787253 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
	I0111 08:19:32.798340 3354790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35843 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/embed-certs-239792/id_rsa Username:docker}
	I0111 08:19:33.023935 3354790 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0111 08:19:33.045094 3354790 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:19:33.079694 3354790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0111 08:19:33.096495 3354790 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0111 08:19:33.457640 3354790 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0111 08:19:33.459826 3354790 node_ready.go:35] waiting up to 6m0s for node "embed-certs-239792" to be "Ready" ...
	I0111 08:19:33.964362 3354790 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-239792" context rescaled to 1 replicas
	I0111 08:19:33.979828 3354790 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0111 08:19:33.982623 3354790 addons.go:530] duration metric: took 1.294456351s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0111 08:19:35.462674 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
	W0111 08:19:37.463204 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
	W0111 08:19:39.963372 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
	W0111 08:19:41.963855 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
	W0111 08:19:43.963990 3354790 node_ready.go:57] node "embed-certs-239792" has "Ready":"False" status (will retry)
	I0111 08:19:45.470437 3354790 node_ready.go:49] node "embed-certs-239792" is "Ready"
	I0111 08:19:45.470475 3354790 node_ready.go:38] duration metric: took 12.010582077s for node "embed-certs-239792" to be "Ready" ...
	I0111 08:19:45.470494 3354790 api_server.go:52] waiting for apiserver process to appear ...
	I0111 08:19:45.470586 3354790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 08:19:45.520226 3354790 api_server.go:72] duration metric: took 12.832443479s to wait for apiserver process to appear ...
	I0111 08:19:45.520258 3354790 api_server.go:88] waiting for apiserver healthz status ...
	I0111 08:19:45.520338 3354790 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0111 08:19:45.536662 3354790 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0111 08:19:45.548192 3354790 api_server.go:141] control plane version: v1.35.0
	I0111 08:19:45.548225 3354790 api_server.go:131] duration metric: took 27.958128ms to wait for apiserver health ...
	I0111 08:19:45.548234 3354790 system_pods.go:43] waiting for kube-system pods to appear ...
	I0111 08:19:45.555121 3354790 system_pods.go:59] 8 kube-system pods found
	I0111 08:19:45.555169 3354790 system_pods.go:61] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:19:45.555181 3354790 system_pods.go:61] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:19:45.555188 3354790 system_pods.go:61] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
	I0111 08:19:45.555195 3354790 system_pods.go:61] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
	I0111 08:19:45.555202 3354790 system_pods.go:61] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
	I0111 08:19:45.555206 3354790 system_pods.go:61] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
	I0111 08:19:45.555211 3354790 system_pods.go:61] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
	I0111 08:19:45.555220 3354790 system_pods.go:61] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 08:19:45.555227 3354790 system_pods.go:74] duration metric: took 6.986863ms to wait for pod list to return data ...
	I0111 08:19:45.555239 3354790 default_sa.go:34] waiting for default service account to be created ...
	I0111 08:19:45.573583 3354790 default_sa.go:45] found service account: "default"
	I0111 08:19:45.573617 3354790 default_sa.go:55] duration metric: took 18.371036ms for default service account to be created ...
	I0111 08:19:45.573631 3354790 system_pods.go:116] waiting for k8s-apps to be running ...
	I0111 08:19:45.584573 3354790 system_pods.go:86] 8 kube-system pods found
	I0111 08:19:45.584615 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:19:45.584625 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:19:45.584632 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
	I0111 08:19:45.584641 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
	I0111 08:19:45.584655 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
	I0111 08:19:45.584669 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
	I0111 08:19:45.584682 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
	I0111 08:19:45.584688 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 08:19:45.584725 3354790 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0111 08:19:45.849880 3354790 system_pods.go:86] 8 kube-system pods found
	I0111 08:19:45.849918 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:19:45.849927 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:19:45.849942 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
	I0111 08:19:45.849949 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
	I0111 08:19:45.849954 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
	I0111 08:19:45.849967 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
	I0111 08:19:45.849978 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
	I0111 08:19:45.849985 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 08:19:46.154338 3354790 system_pods.go:86] 8 kube-system pods found
	I0111 08:19:46.154380 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:19:46.154392 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:19:46.154398 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
	I0111 08:19:46.154404 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
	I0111 08:19:46.154410 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
	I0111 08:19:46.154415 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
	I0111 08:19:46.154420 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
	I0111 08:19:46.154425 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 08:19:46.623537 3354790 system_pods.go:86] 8 kube-system pods found
	I0111 08:19:46.623576 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0111 08:19:46.623585 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:19:46.623592 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
	I0111 08:19:46.623597 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
	I0111 08:19:46.623603 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
	I0111 08:19:46.623653 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
	I0111 08:19:46.623659 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
	I0111 08:19:46.623665 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0111 08:19:47.175907 3354790 system_pods.go:86] 8 kube-system pods found
	I0111 08:19:47.175954 3354790 system_pods.go:89] "coredns-7d764666f9-xpszs" [34169519-e874-4eda-bdac-d6247227597a] Running
	I0111 08:19:47.175967 3354790 system_pods.go:89] "etcd-embed-certs-239792" [608377ab-2e76-46ee-b306-20d4b71e6efc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0111 08:19:47.175973 3354790 system_pods.go:89] "kindnet-f6k98" [68ea960a-d14e-4e30-823c-e4764d383a22] Running
	I0111 08:19:47.175982 3354790 system_pods.go:89] "kube-apiserver-embed-certs-239792" [e09d8841-f87c-4d9e-8977-748dd580c23f] Running
	I0111 08:19:47.175990 3354790 system_pods.go:89] "kube-controller-manager-embed-certs-239792" [9bc420e6-abcf-4ad7-a173-be1053698174] Running
	I0111 08:19:47.175995 3354790 system_pods.go:89] "kube-proxy-8tlw4" [257d9528-3cd8-4f07-8184-767ac8431913] Running
	I0111 08:19:47.176000 3354790 system_pods.go:89] "kube-scheduler-embed-certs-239792" [3e508900-d6e4-4bb4-bd7a-d43b5af01487] Running
	I0111 08:19:47.176006 3354790 system_pods.go:89] "storage-provisioner" [ffeb23ad-8030-4bd1-9686-a4a1f9d4ba48] Running
	I0111 08:19:47.176019 3354790 system_pods.go:126] duration metric: took 1.602374511s to wait for k8s-apps to be running ...
	I0111 08:19:47.176030 3354790 system_svc.go:44] waiting for kubelet service to be running ....
	I0111 08:19:47.176090 3354790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:19:47.189104 3354790 system_svc.go:56] duration metric: took 13.062731ms WaitForService to wait for kubelet
	I0111 08:19:47.189173 3354790 kubeadm.go:587] duration metric: took 14.501396628s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0111 08:19:47.189201 3354790 node_conditions.go:102] verifying NodePressure condition ...
	I0111 08:19:47.192452 3354790 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0111 08:19:47.192487 3354790 node_conditions.go:123] node cpu capacity is 2
	I0111 08:19:47.192501 3354790 node_conditions.go:105] duration metric: took 3.293604ms to run NodePressure ...
	I0111 08:19:47.192513 3354790 start.go:242] waiting for startup goroutines ...
	I0111 08:19:47.192521 3354790 start.go:247] waiting for cluster config update ...
	I0111 08:19:47.192532 3354790 start.go:256] writing updated cluster config ...
	I0111 08:19:47.192844 3354790 ssh_runner.go:195] Run: rm -f paused
	I0111 08:19:47.196266 3354790 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 08:19:47.199715 3354790 pod_ready.go:83] waiting for pod "coredns-7d764666f9-xpszs" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:47.204562 3354790 pod_ready.go:94] pod "coredns-7d764666f9-xpszs" is "Ready"
	I0111 08:19:47.204593 3354790 pod_ready.go:86] duration metric: took 4.838128ms for pod "coredns-7d764666f9-xpszs" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:47.206860 3354790 pod_ready.go:83] waiting for pod "etcd-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:47.712565 3354790 pod_ready.go:94] pod "etcd-embed-certs-239792" is "Ready"
	I0111 08:19:47.712594 3354790 pod_ready.go:86] duration metric: took 505.707766ms for pod "etcd-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:47.715109 3354790 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:47.719762 3354790 pod_ready.go:94] pod "kube-apiserver-embed-certs-239792" is "Ready"
	I0111 08:19:47.719833 3354790 pod_ready.go:86] duration metric: took 4.698357ms for pod "kube-apiserver-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:47.722413 3354790 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:48.001867 3354790 pod_ready.go:94] pod "kube-controller-manager-embed-certs-239792" is "Ready"
	I0111 08:19:48.001895 3354790 pod_ready.go:86] duration metric: took 279.417712ms for pod "kube-controller-manager-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:48.201376 3354790 pod_ready.go:83] waiting for pod "kube-proxy-8tlw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:48.600582 3354790 pod_ready.go:94] pod "kube-proxy-8tlw4" is "Ready"
	I0111 08:19:48.600610 3354790 pod_ready.go:86] duration metric: took 399.202398ms for pod "kube-proxy-8tlw4" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:48.800834 3354790 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:49.200339 3354790 pod_ready.go:94] pod "kube-scheduler-embed-certs-239792" is "Ready"
	I0111 08:19:49.200367 3354790 pod_ready.go:86] duration metric: took 399.502379ms for pod "kube-scheduler-embed-certs-239792" in "kube-system" namespace to be "Ready" or be gone ...
	I0111 08:19:49.200379 3354790 pod_ready.go:40] duration metric: took 2.004053074s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0111 08:19:49.257862 3354790 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0111 08:19:49.261264 3354790 out.go:203] 
	W0111 08:19:49.264163 3354790 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0111 08:19:49.267031 3354790 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0111 08:19:49.270127 3354790 out.go:179] * Done! kubectl is now configured to use "embed-certs-239792" cluster and "default" namespace by default
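
This closes the concurrent embed-certs-239792 start (PID 3354790); the lines that follow return to the failing force-systemd-flag-610060 process (PID 3329885). A quick manual check of a freshly started profile like this one would be, assuming the kubectl context name matches the profile name as in the log's final message:

    kubectl --context embed-certs-239792 get nodes
    kubectl --context embed-certs-239792 -n kube-system get pods
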
	I0111 08:20:06.987125 3329885 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001117732s
	I0111 08:20:06.987156 3329885 kubeadm.go:319] 
	I0111 08:20:06.987538 3329885 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:20:06.987650 3329885 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:20:06.987914 3329885 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:20:06.987923 3329885 kubeadm.go:319] 
	I0111 08:20:06.988383 3329885 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:20:06.988449 3329885 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:20:06.988619 3329885 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:20:06.988627 3329885 kubeadm.go:319] 
	I0111 08:20:06.994037 3329885 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:20:06.994459 3329885 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:20:06.994571 3329885 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:20:06.994810 3329885 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:20:06.994819 3329885 kubeadm.go:319] 
	I0111 08:20:06.994887 3329885 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:20:06.994944 3329885 kubeadm.go:403] duration metric: took 8m7.611643479s to StartCluster
	I0111 08:20:06.994981 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0111 08:20:06.995043 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 08:20:07.022617 3329885 cri.go:96] found id: ""
	I0111 08:20:07.022699 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.022717 3329885 logs.go:284] No container was found matching "kube-apiserver"
	I0111 08:20:07.022724 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0111 08:20:07.022804 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 08:20:07.050589 3329885 cri.go:96] found id: ""
	I0111 08:20:07.050614 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.050623 3329885 logs.go:284] No container was found matching "etcd"
	I0111 08:20:07.050629 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0111 08:20:07.050713 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 08:20:07.076582 3329885 cri.go:96] found id: ""
	I0111 08:20:07.076608 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.076618 3329885 logs.go:284] No container was found matching "coredns"
	I0111 08:20:07.076625 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0111 08:20:07.076719 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 08:20:07.103212 3329885 cri.go:96] found id: ""
	I0111 08:20:07.103238 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.103247 3329885 logs.go:284] No container was found matching "kube-scheduler"
	I0111 08:20:07.103254 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0111 08:20:07.103318 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 08:20:07.129632 3329885 cri.go:96] found id: ""
	I0111 08:20:07.129709 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.129733 3329885 logs.go:284] No container was found matching "kube-proxy"
	I0111 08:20:07.129744 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 08:20:07.129817 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 08:20:07.155361 3329885 cri.go:96] found id: ""
	I0111 08:20:07.155388 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.155397 3329885 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 08:20:07.155404 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0111 08:20:07.155466 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 08:20:07.180698 3329885 cri.go:96] found id: ""
	I0111 08:20:07.180793 3329885 logs.go:282] 0 containers: []
	W0111 08:20:07.180810 3329885 logs.go:284] No container was found matching "kindnet"
	I0111 08:20:07.180822 3329885 logs.go:123] Gathering logs for kubelet ...
	I0111 08:20:07.180834 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 08:20:07.237611 3329885 logs.go:123] Gathering logs for dmesg ...
	I0111 08:20:07.237644 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 08:20:07.252588 3329885 logs.go:123] Gathering logs for describe nodes ...
	I0111 08:20:07.252615 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 08:20:07.317178 3329885 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:20:07.309153    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.309712    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.311194    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.311610    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.313064    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 08:20:07.309153    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.309712    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.311194    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.311610    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:07.313064    4820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 08:20:07.317197 3329885 logs.go:123] Gathering logs for containerd ...
	I0111 08:20:07.317211 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0111 08:20:07.357000 3329885 logs.go:123] Gathering logs for container status ...
	I0111 08:20:07.357043 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0111 08:20:07.386881 3329885 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 08:20:07.386931 3329885 out.go:285] * 
	W0111 08:20:07.386981 3329885 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:20:07.386998 3329885 out.go:285] * 
	W0111 08:20:07.387249 3329885 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:20:07.394169 3329885 out.go:203] 
	W0111 08:20:07.397066 3329885 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001117732s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:20:07.397111 3329885 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 08:20:07.397136 3329885 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 08:20:07.400215 3329885 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106043698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106059312Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106107130Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106123294Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106140902Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106151281Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106160692Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106179998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106198976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106229408Z" level=info msg="Connect containerd service"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.106524622Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.107079858Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.127986290Z" level=info msg="Start subscribing containerd event"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.128066936Z" level=info msg="Start recovering state"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.128760385Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.128973590Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164185276Z" level=info msg="Start event monitor"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164408270Z" level=info msg="Start cni network conf syncer for default"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164484469Z" level=info msg="Start streaming server"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164546376Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164601825Z" level=info msg="runtime interface starting up..."
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164657019Z" level=info msg="starting plugins..."
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.164718121Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 11 08:11:57 force-systemd-flag-610060 systemd[1]: Started containerd.service - containerd container runtime.
	Jan 11 08:11:57 force-systemd-flag-610060 containerd[757]: time="2026-01-11T08:11:57.166571568Z" level=info msg="containerd successfully booted in 0.081008s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:20:08.752548    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:08.753113    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:08.754736    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:08.755263    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:20:08.757023    4950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan11 07:19] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jan11 07:25] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 08:20:08 up 14:02,  0 user,  load average: 1.81, 1.90, 2.03
	Linux force-systemd-flag-610060 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 11 08:20:05 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 317.
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:20:06 force-systemd-flag-610060 kubelet[4745]: E0111 08:20:06.092362    4745 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:20:06 force-systemd-flag-610060 kubelet[4750]: E0111 08:20:06.839632    4750 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:20:06 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:20:07 force-systemd-flag-610060 kubelet[4838]: E0111 08:20:07.612139    4838 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:20:07 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:20:08 force-systemd-flag-610060 kubelet[4866]: E0111 08:20:08.354429    4866 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:20:08 force-systemd-flag-610060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-610060 -n force-systemd-flag-610060
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-610060 -n force-systemd-flag-610060: exit status 6 (334.83814ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:20:09.200499 3359135 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-610060" does not appear in /home/jenkins/minikube-integration/22402-3122619/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-610060" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-610060" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-610060
E0111 08:20:10.007247 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-610060: (1.969947542s)
--- FAIL: TestForceSystemdFlag (505.26s)

                                                
                                    
x
+
TestForceSystemdEnv (506.41s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-305397 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-305397 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: exit status 109 (8m22.47202867s)

                                                
                                                
-- stdout --
	* [force-systemd-env-305397] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-305397" primary control-plane node in "force-systemd-env-305397" cluster
	* Pulling base image v0.0.48-1768032998-22402 ...
	* Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:05:07.344273 3308660 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:05:07.344601 3308660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:05:07.344627 3308660 out.go:374] Setting ErrFile to fd 2...
	I0111 08:05:07.344649 3308660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:05:07.348788 3308660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 08:05:07.349515 3308660 out.go:368] Setting JSON to false
	I0111 08:05:07.350397 3308660 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":49659,"bootTime":1768069049,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 08:05:07.350498 3308660 start.go:143] virtualization:  
	I0111 08:05:07.355014 3308660 out.go:179] * [force-systemd-env-305397] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:05:07.358565 3308660 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:05:07.358796 3308660 notify.go:221] Checking for updates...
	I0111 08:05:07.365387 3308660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:05:07.368704 3308660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 08:05:07.371978 3308660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 08:05:07.375262 3308660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:05:07.378625 3308660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I0111 08:05:07.382266 3308660 config.go:182] Loaded profile config "test-preload-819303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:05:07.382402 3308660 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:05:07.431635 3308660 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:05:07.431776 3308660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:05:07.519393 3308660 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-11 08:05:07.509377359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:05:07.519498 3308660 docker.go:319] overlay module found
	I0111 08:05:07.524718 3308660 out.go:179] * Using the docker driver based on user configuration
	I0111 08:05:07.527733 3308660 start.go:309] selected driver: docker
	I0111 08:05:07.527754 3308660 start.go:928] validating driver "docker" against <nil>
	I0111 08:05:07.527769 3308660 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:05:07.528710 3308660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:05:07.629793 3308660 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-11 08:05:07.619998704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:05:07.629947 3308660 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:05:07.630172 3308660 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:05:07.633370 3308660 out.go:179] * Using Docker driver with root privileges
	I0111 08:05:07.636371 3308660 cni.go:84] Creating CNI manager for ""
	I0111 08:05:07.636444 3308660 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:05:07.636460 3308660 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:05:07.636538 3308660 start.go:353] cluster config:
	{Name:force-systemd-env-305397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-305397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:05:07.639712 3308660 out.go:179] * Starting "force-systemd-env-305397" primary control-plane node in "force-systemd-env-305397" cluster
	I0111 08:05:07.642711 3308660 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0111 08:05:07.645757 3308660 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:05:07.648598 3308660 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:05:07.648652 3308660 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I0111 08:05:07.648663 3308660 cache.go:65] Caching tarball of preloaded images
	I0111 08:05:07.648753 3308660 preload.go:251] Found /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0111 08:05:07.648768 3308660 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I0111 08:05:07.648881 3308660 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/config.json ...
	I0111 08:05:07.648905 3308660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/config.json: {Name:mkb55171981c8f214d162f97336bec9b21389b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:05:07.649062 3308660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:05:07.673061 3308660 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:05:07.673081 3308660 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:05:07.673097 3308660 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:05:07.673126 3308660 start.go:360] acquireMachinesLock for force-systemd-env-305397: {Name:mk24e458890b138ead5dc16158ff91b3c944d015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:05:07.673234 3308660 start.go:364] duration metric: took 93.208µs to acquireMachinesLock for "force-systemd-env-305397"
	I0111 08:05:07.673258 3308660 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-305397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-305397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0111 08:05:07.673330 3308660 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:05:07.677774 3308660 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:05:07.678029 3308660 start.go:159] libmachine.API.Create for "force-systemd-env-305397" (driver="docker")
	I0111 08:05:07.678059 3308660 client.go:173] LocalClient.Create starting
	I0111 08:05:07.678128 3308660 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem
	I0111 08:05:07.678160 3308660 main.go:144] libmachine: Decoding PEM data...
	I0111 08:05:07.678179 3308660 main.go:144] libmachine: Parsing certificate...
	I0111 08:05:07.678234 3308660 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem
	I0111 08:05:07.678252 3308660 main.go:144] libmachine: Decoding PEM data...
	I0111 08:05:07.678263 3308660 main.go:144] libmachine: Parsing certificate...
	I0111 08:05:07.678648 3308660 cli_runner.go:164] Run: docker network inspect force-systemd-env-305397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:05:07.696665 3308660 cli_runner.go:211] docker network inspect force-systemd-env-305397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:05:07.696744 3308660 network_create.go:284] running [docker network inspect force-systemd-env-305397] to gather additional debugging logs...
	I0111 08:05:07.696761 3308660 cli_runner.go:164] Run: docker network inspect force-systemd-env-305397
	W0111 08:05:07.716642 3308660 cli_runner.go:211] docker network inspect force-systemd-env-305397 returned with exit code 1
	I0111 08:05:07.716676 3308660 network_create.go:287] error running [docker network inspect force-systemd-env-305397]: docker network inspect force-systemd-env-305397: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-305397 not found
	I0111 08:05:07.716689 3308660 network_create.go:289] output of [docker network inspect force-systemd-env-305397]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-305397 not found
	
	** /stderr **
	I0111 08:05:07.716814 3308660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:05:07.737926 3308660 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d6a2604bb10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:cd:63:f9:b2:f8} reservation:<nil>}
	I0111 08:05:07.738289 3308660 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cec031213447 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:71:bf:56:ac:cb} reservation:<nil>}
	I0111 08:05:07.738499 3308660 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0e2d137ca1da IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:68:81:9e:35:63} reservation:<nil>}
	I0111 08:05:07.738903 3308660 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3c420}
	I0111 08:05:07.738920 3308660 network_create.go:124] attempt to create docker network force-systemd-env-305397 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0111 08:05:07.738979 3308660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-305397 force-systemd-env-305397
	I0111 08:05:07.814650 3308660 network_create.go:108] docker network force-systemd-env-305397 192.168.76.0/24 created
	I0111 08:05:07.814696 3308660 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-305397" container
	I0111 08:05:07.814777 3308660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:05:07.843400 3308660 cli_runner.go:164] Run: docker volume create force-systemd-env-305397 --label name.minikube.sigs.k8s.io=force-systemd-env-305397 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:05:07.861049 3308660 oci.go:103] Successfully created a docker volume force-systemd-env-305397
	I0111 08:05:07.861147 3308660 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-305397-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-305397 --entrypoint /usr/bin/test -v force-systemd-env-305397:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:05:08.427820 3308660 oci.go:107] Successfully prepared a docker volume force-systemd-env-305397
	I0111 08:05:08.427883 3308660 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:05:08.427893 3308660 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:05:08.427958 3308660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-305397:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:05:13.711638 3308660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-305397:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (5.283633318s)
	I0111 08:05:13.711671 3308660 kic.go:203] duration metric: took 5.28377455s to extract preloaded images to volume ...
	W0111 08:05:13.711809 3308660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:05:13.711934 3308660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:05:13.768230 3308660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-305397 --name force-systemd-env-305397 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-305397 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-305397 --network force-systemd-env-305397 --ip 192.168.76.2 --volume force-systemd-env-305397:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:05:14.134009 3308660 cli_runner.go:164] Run: docker container inspect force-systemd-env-305397 --format={{.State.Running}}
	I0111 08:05:14.160612 3308660 cli_runner.go:164] Run: docker container inspect force-systemd-env-305397 --format={{.State.Status}}
	I0111 08:05:14.190443 3308660 cli_runner.go:164] Run: docker exec force-systemd-env-305397 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:05:14.262377 3308660 oci.go:144] the created container "force-systemd-env-305397" has a running status.
	I0111 08:05:14.262415 3308660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-env-305397/id_rsa...
	I0111 08:05:14.294831 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-env-305397/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:05:14.295017 3308660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-env-305397/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:05:14.324741 3308660 cli_runner.go:164] Run: docker container inspect force-systemd-env-305397 --format={{.State.Status}}
	I0111 08:05:14.355638 3308660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:05:14.355664 3308660 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-305397 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:05:14.415419 3308660 cli_runner.go:164] Run: docker container inspect force-systemd-env-305397 --format={{.State.Status}}
	I0111 08:05:14.440127 3308660 machine.go:94] provisionDockerMachine start ...
	I0111 08:05:14.440223 3308660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-305397
	I0111 08:05:14.479518 3308660 main.go:144] libmachine: Using SSH client type: native
	I0111 08:05:14.479895 3308660 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35783 <nil> <nil>}
	I0111 08:05:14.479905 3308660 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:05:14.480518 3308660 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 08:05:17.636499 3308660 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-305397
	
	I0111 08:05:17.636575 3308660 ubuntu.go:182] provisioning hostname "force-systemd-env-305397"
	I0111 08:05:17.636681 3308660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-305397
	I0111 08:05:17.659267 3308660 main.go:144] libmachine: Using SSH client type: native
	I0111 08:05:17.659575 3308660 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35783 <nil> <nil>}
	I0111 08:05:17.659586 3308660 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-305397 && echo "force-systemd-env-305397" | sudo tee /etc/hostname
	I0111 08:05:17.838438 3308660 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-305397
	
	I0111 08:05:17.838539 3308660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-305397
	I0111 08:05:17.855869 3308660 main.go:144] libmachine: Using SSH client type: native
	I0111 08:05:17.856188 3308660 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35783 <nil> <nil>}
	I0111 08:05:17.856211 3308660 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-305397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-305397/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-305397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:05:18.007685 3308660 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:05:18.007713 3308660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-3122619/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-3122619/.minikube}
	I0111 08:05:18.007736 3308660 ubuntu.go:190] setting up certificates
	I0111 08:05:18.007746 3308660 provision.go:84] configureAuth start
	I0111 08:05:18.007815 3308660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-305397
	I0111 08:05:18.037893 3308660 provision.go:143] copyHostCerts
	I0111 08:05:18.037947 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
	I0111 08:05:18.037982 3308660 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem, removing ...
	I0111 08:05:18.037988 3308660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
	I0111 08:05:18.038067 3308660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem (1078 bytes)
	I0111 08:05:18.038144 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
	I0111 08:05:18.038161 3308660 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem, removing ...
	I0111 08:05:18.038165 3308660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
	I0111 08:05:18.038193 3308660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem (1123 bytes)
	I0111 08:05:18.038232 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
	I0111 08:05:18.038252 3308660 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem, removing ...
	I0111 08:05:18.038257 3308660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
	I0111 08:05:18.038280 3308660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem (1675 bytes)
	I0111 08:05:18.038324 3308660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-305397 san=[127.0.0.1 192.168.76.2 force-systemd-env-305397 localhost minikube]
	I0111 08:05:18.233175 3308660 provision.go:177] copyRemoteCerts
	I0111 08:05:18.233300 3308660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:05:18.233361 3308660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-305397
	I0111 08:05:18.253513 3308660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35783 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-env-305397/id_rsa Username:docker}
	I0111 08:05:18.365215 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:05:18.365276 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 08:05:18.386161 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:05:18.386235 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:05:18.408086 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:05:18.408179 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0111 08:05:18.429272 3308660 provision.go:87] duration metric: took 421.503464ms to configureAuth
	I0111 08:05:18.429300 3308660 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:05:18.429514 3308660 config.go:182] Loaded profile config "force-systemd-env-305397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:05:18.429530 3308660 machine.go:97] duration metric: took 3.989385001s to provisionDockerMachine
	I0111 08:05:18.429550 3308660 client.go:176] duration metric: took 10.751474132s to LocalClient.Create
	I0111 08:05:18.429573 3308660 start.go:167] duration metric: took 10.751545128s to libmachine.API.Create "force-systemd-env-305397"
	I0111 08:05:18.429584 3308660 start.go:293] postStartSetup for "force-systemd-env-305397" (driver="docker")
	I0111 08:05:18.429594 3308660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:05:18.429661 3308660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:05:18.429722 3308660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-305397
	I0111 08:05:18.448891 3308660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35783 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-env-305397/id_rsa Username:docker}
	I0111 08:05:18.557311 3308660 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:05:18.561229 3308660 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:05:18.561260 3308660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:05:18.561271 3308660 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/addons for local assets ...
	I0111 08:05:18.561323 3308660 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/files for local assets ...
	I0111 08:05:18.561411 3308660 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> 31244842.pem in /etc/ssl/certs
	I0111 08:05:18.561423 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> /etc/ssl/certs/31244842.pem
	I0111 08:05:18.561531 3308660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:05:18.570008 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /etc/ssl/certs/31244842.pem (1708 bytes)
	I0111 08:05:18.594245 3308660 start.go:296] duration metric: took 164.64672ms for postStartSetup
	I0111 08:05:18.594673 3308660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-305397
	I0111 08:05:18.621928 3308660 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/config.json ...
	I0111 08:05:18.622204 3308660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:05:18.622255 3308660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-305397
	I0111 08:05:18.646520 3308660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35783 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-env-305397/id_rsa Username:docker}
	I0111 08:05:18.758446 3308660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:05:18.764239 3308660 start.go:128] duration metric: took 11.090894521s to createHost
	I0111 08:05:18.764266 3308660 start.go:83] releasing machines lock for "force-systemd-env-305397", held for 11.091023567s
	I0111 08:05:18.764373 3308660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-305397
	I0111 08:05:18.783830 3308660 ssh_runner.go:195] Run: cat /version.json
	I0111 08:05:18.783887 3308660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-305397
	I0111 08:05:18.784132 3308660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:05:18.784191 3308660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-305397
	I0111 08:05:18.820203 3308660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35783 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-env-305397/id_rsa Username:docker}
	I0111 08:05:18.820211 3308660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35783 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-env-305397/id_rsa Username:docker}
	I0111 08:05:19.027239 3308660 ssh_runner.go:195] Run: systemctl --version
	I0111 08:05:19.034969 3308660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:05:19.042328 3308660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:05:19.042408 3308660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:05:19.091220 3308660 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:05:19.091246 3308660 start.go:496] detecting cgroup driver to use...
	I0111 08:05:19.091264 3308660 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:05:19.091345 3308660 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0111 08:05:19.116824 3308660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0111 08:05:19.132837 3308660 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:05:19.132938 3308660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:05:19.152516 3308660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:05:19.173452 3308660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:05:19.324553 3308660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:05:19.475718 3308660 docker.go:234] disabling docker service ...
	I0111 08:05:19.475811 3308660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:05:19.502928 3308660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:05:19.517237 3308660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:05:19.676783 3308660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:05:19.839384 3308660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:05:19.855323 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:05:19.871442 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0111 08:05:19.881505 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0111 08:05:19.890890 3308660 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0111 08:05:19.891003 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0111 08:05:19.900640 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:05:19.910194 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0111 08:05:19.920466 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:05:19.930219 3308660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:05:19.939630 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0111 08:05:19.950615 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0111 08:05:19.960348 3308660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0111 08:05:19.970223 3308660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:05:19.979130 3308660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:05:19.987738 3308660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:05:20.272691 3308660 ssh_runner.go:195] Run: sudo systemctl restart containerd
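The sed edits above flip SystemdCgroup to true in /etc/containerd/config.toml before containerd is restarted, so the runtime matches the "systemd" cgroup driver that the test forces. A minimal sketch of how that could be verified by hand against the same node container; the container name is taken from this log, while the commands themselves are ordinary docker/grep/systemctl usage and are not part of the recorded run:

	docker exec force-systemd-env-305397 grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
	docker exec force-systemd-env-305397 systemctl is-active containerd                        # expect: active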
	I0111 08:05:20.547628 3308660 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I0111 08:05:20.547702 3308660 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0111 08:05:20.552006 3308660 start.go:574] Will wait 60s for crictl version
	I0111 08:05:20.552072 3308660 ssh_runner.go:195] Run: which crictl
	I0111 08:05:20.555983 3308660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:05:20.597825 3308660 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I0111 08:05:20.597904 3308660 ssh_runner.go:195] Run: containerd --version
	I0111 08:05:20.640686 3308660 ssh_runner.go:195] Run: containerd --version
	I0111 08:05:20.691166 3308660 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I0111 08:05:20.694270 3308660 cli_runner.go:164] Run: docker network inspect force-systemd-env-305397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:05:20.716583 3308660 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 08:05:20.722100 3308660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:05:20.739249 3308660 kubeadm.go:884] updating cluster {Name:force-systemd-env-305397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-305397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:05:20.739364 3308660 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:05:20.739428 3308660 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:05:20.780874 3308660 containerd.go:635] all images are preloaded for containerd runtime.
	I0111 08:05:20.780895 3308660 containerd.go:542] Images already preloaded, skipping extraction
	I0111 08:05:20.780953 3308660 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:05:20.822504 3308660 containerd.go:635] all images are preloaded for containerd runtime.
	I0111 08:05:20.822526 3308660 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:05:20.822534 3308660 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 containerd true true} ...
	I0111 08:05:20.822623 3308660 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-305397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-305397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:05:20.822692 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I0111 08:05:20.868775 3308660 cni.go:84] Creating CNI manager for ""
	I0111 08:05:20.868804 3308660 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:05:20.868826 3308660 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:05:20.868850 3308660 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-305397 NodeName:force-systemd-env-305397 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:05:20.868977 3308660 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-env-305397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
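	The generated KubeletConfiguration above pins cgroupDriver: systemd, which only works if the kubelet, containerd, and the cgroup hierarchy exposed to the node container all agree. A sketch of how to check which cgroup version the node actually sees, assuming the standard /sys/fs/cgroup mount inside the kicbase image; this check is not part of the recorded run:

	docker exec force-systemd-env-305397 stat -fc %T /sys/fs/cgroup
	# cgroup2fs -> unified cgroup v2 hierarchy; tmpfs -> legacy cgroup v1,
	# consistent with the "cgroups v1 support is deprecated" warning kubeadm prints later in this log.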
	
	I0111 08:05:20.869048 3308660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:05:20.879098 3308660 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:05:20.879169 3308660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:05:20.888494 3308660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0111 08:05:20.905218 3308660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:05:20.920769 3308660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2236 bytes)
	I0111 08:05:20.937968 3308660 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:05:20.946322 3308660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:05:20.966402 3308660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:05:21.289068 3308660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:05:21.333488 3308660 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397 for IP: 192.168.76.2
	I0111 08:05:21.333512 3308660 certs.go:195] generating shared ca certs ...
	I0111 08:05:21.333527 3308660 certs.go:227] acquiring lock for ca certs: {Name:mk4f88e5992499f3a8089baf463e3ba7f81a52c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:05:21.333681 3308660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key
	I0111 08:05:21.333729 3308660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key
	I0111 08:05:21.333741 3308660 certs.go:257] generating profile certs ...
	I0111 08:05:21.333800 3308660 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/client.key
	I0111 08:05:21.333816 3308660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/client.crt with IP's: []
	I0111 08:05:21.634581 3308660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/client.crt ...
	I0111 08:05:21.634664 3308660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/client.crt: {Name:mk7d9445932faabb4e7fde9c62a19162dd1483dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:05:21.634900 3308660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/client.key ...
	I0111 08:05:21.634952 3308660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/client.key: {Name:mk36cfbd4b27ff7f26f218a0946910f4060b9f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:05:21.635101 3308660 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.key.7f154461
	I0111 08:05:21.635141 3308660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.crt.7f154461 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0111 08:05:21.781432 3308660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.crt.7f154461 ...
	I0111 08:05:21.781512 3308660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.crt.7f154461: {Name:mke43bec400422d2b0c7e2887b4a1ad8faf3e8c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:05:21.781975 3308660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.key.7f154461 ...
	I0111 08:05:21.782020 3308660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.key.7f154461: {Name:mk2b58fcda01f7f5ba66ed7ec4ea2eb80a0863ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:05:21.782175 3308660 certs.go:382] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.crt.7f154461 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.crt
	I0111 08:05:21.782320 3308660 certs.go:386] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.key.7f154461 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.key
	I0111 08:05:21.782433 3308660 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.key
	I0111 08:05:21.782472 3308660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.crt with IP's: []
	I0111 08:05:22.012200 3308660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.crt ...
	I0111 08:05:22.012293 3308660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.crt: {Name:mkf71c4b6b770904a50a5080919f1c53c8a032ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:05:22.012563 3308660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.key ...
	I0111 08:05:22.012603 3308660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.key: {Name:mkec44c2b5e0b2b94428dd4cba2f2b04aa37e20e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:05:22.012758 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:05:22.012804 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:05:22.012833 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:05:22.012876 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:05:22.012908 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:05:22.012939 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:05:22.012981 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:05:22.013017 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:05:22.013114 3308660 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem (1338 bytes)
	W0111 08:05:22.013176 3308660 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484_empty.pem, impossibly tiny 0 bytes
	I0111 08:05:22.013202 3308660 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 08:05:22.013264 3308660 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:05:22.013320 3308660 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:05:22.013369 3308660 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem (1675 bytes)
	I0111 08:05:22.013455 3308660 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem (1708 bytes)
	I0111 08:05:22.013512 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem -> /usr/share/ca-certificates/3124484.pem
	I0111 08:05:22.013541 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> /usr/share/ca-certificates/31244842.pem
	I0111 08:05:22.013578 3308660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:05:22.014192 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:05:22.032653 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 08:05:22.054514 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:05:22.072144 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:05:22.090269 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:05:22.109645 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:05:22.128047 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:05:22.147369 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-env-305397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:05:22.166909 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem --> /usr/share/ca-certificates/3124484.pem (1338 bytes)
	I0111 08:05:22.186575 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /usr/share/ca-certificates/31244842.pem (1708 bytes)
	I0111 08:05:22.205519 3308660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:05:22.224190 3308660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:05:22.238279 3308660 ssh_runner.go:195] Run: openssl version
	I0111 08:05:22.244768 3308660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:05:22.252906 3308660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:05:22.260790 3308660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:05:22.264889 3308660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:26 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:05:22.264956 3308660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:05:22.328512 3308660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:05:22.338781 3308660 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:05:22.356138 3308660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3124484.pem
	I0111 08:05:22.368743 3308660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3124484.pem /etc/ssl/certs/3124484.pem
	I0111 08:05:22.380424 3308660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3124484.pem
	I0111 08:05:22.387428 3308660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:32 /usr/share/ca-certificates/3124484.pem
	I0111 08:05:22.387494 3308660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3124484.pem
	I0111 08:05:22.430510 3308660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:05:22.438376 3308660 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3124484.pem /etc/ssl/certs/51391683.0
	I0111 08:05:22.445624 3308660 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/31244842.pem
	I0111 08:05:22.452886 3308660 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/31244842.pem /etc/ssl/certs/31244842.pem
	I0111 08:05:22.460372 3308660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31244842.pem
	I0111 08:05:22.464119 3308660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:32 /usr/share/ca-certificates/31244842.pem
	I0111 08:05:22.464188 3308660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31244842.pem
	I0111 08:05:22.505111 3308660 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:05:22.512477 3308660 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/31244842.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:05:22.520779 3308660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:05:22.525596 3308660 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:05:22.525646 3308660 kubeadm.go:401] StartCluster: {Name:force-systemd-env-305397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-305397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:05:22.525709 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0111 08:05:22.525777 3308660 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:05:22.594510 3308660 cri.go:96] found id: ""
	I0111 08:05:22.594582 3308660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:05:22.608730 3308660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:05:22.617410 3308660 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:05:22.617471 3308660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:05:22.627384 3308660 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:05:22.627401 3308660 kubeadm.go:158] found existing configuration files:
	
	I0111 08:05:22.627453 3308660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:05:22.636187 3308660 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:05:22.636244 3308660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:05:22.644887 3308660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:05:22.653983 3308660 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:05:22.654053 3308660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:05:22.663050 3308660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:05:22.670834 3308660 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:05:22.670899 3308660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:05:22.678735 3308660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:05:22.688848 3308660 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:05:22.688924 3308660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:05:22.697751 3308660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:05:22.758593 3308660 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:05:22.760066 3308660 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:05:22.851209 3308660 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:05:22.851344 3308660 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:05:22.851383 3308660 kubeadm.go:319] OS: Linux
	I0111 08:05:22.851461 3308660 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:05:22.851536 3308660 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:05:22.851620 3308660 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:05:22.851698 3308660 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:05:22.851775 3308660 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:05:22.851857 3308660 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:05:22.851931 3308660 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:05:22.852013 3308660 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:05:22.852087 3308660 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:05:22.933665 3308660 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:05:22.933774 3308660 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:05:22.933865 3308660 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:05:22.944832 3308660 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:05:22.950031 3308660 out.go:252]   - Generating certificates and keys ...
	I0111 08:05:22.950185 3308660 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:05:22.950278 3308660 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:05:23.132133 3308660 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:05:23.254833 3308660 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:05:23.434536 3308660 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:05:23.551793 3308660 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:05:23.936608 3308660 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:05:23.936753 3308660 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-305397 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:05:24.242878 3308660 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:05:24.243218 3308660 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-305397 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:05:24.467175 3308660 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:05:24.774384 3308660 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:05:24.921664 3308660 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:05:24.922011 3308660 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:05:25.304129 3308660 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:05:25.637404 3308660 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:05:25.970088 3308660 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:05:26.451843 3308660 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:05:26.603411 3308660 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:05:26.604221 3308660 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:05:26.616642 3308660 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:05:26.620328 3308660 out.go:252]   - Booting up control plane ...
	I0111 08:05:26.620453 3308660 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:05:26.620536 3308660 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:05:26.620621 3308660 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:05:26.635868 3308660 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:05:26.636227 3308660 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:05:26.650419 3308660 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:05:26.650525 3308660 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:05:26.650565 3308660 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:05:26.814960 3308660 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:05:26.815081 3308660 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:09:26.815154 3308660 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001259775s
	I0111 08:09:26.820466 3308660 kubeadm.go:319] 
	I0111 08:09:26.820543 3308660 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:09:26.820577 3308660 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:09:26.820681 3308660 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:09:26.820686 3308660 kubeadm.go:319] 
	I0111 08:09:26.820790 3308660 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:09:26.820822 3308660 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:09:26.820852 3308660 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:09:26.820856 3308660 kubeadm.go:319] 
	I0111 08:09:26.825336 3308660 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:09:26.825759 3308660 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:09:26.825871 3308660 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:09:26.826117 3308660 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:09:26.826127 3308660 kubeadm.go:319] 
	I0111 08:09:26.826194 3308660 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0111 08:09:26.826339 3308660 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-305397 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-305397 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001259775s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-305397 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-305397 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001259775s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I0111 08:09:26.826423 3308660 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0111 08:09:27.253506 3308660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:09:27.268851 3308660 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:09:27.268940 3308660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:09:27.278299 3308660 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:09:27.278322 3308660 kubeadm.go:158] found existing configuration files:
	
	I0111 08:09:27.278387 3308660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:09:27.286627 3308660 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:09:27.286708 3308660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:09:27.294460 3308660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:09:27.302392 3308660 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:09:27.302466 3308660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:09:27.315524 3308660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:09:27.324787 3308660 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:09:27.324916 3308660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:09:27.334051 3308660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:09:27.343342 3308660 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:09:27.343450 3308660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:09:27.351405 3308660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:09:27.390601 3308660 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:09:27.390663 3308660 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:09:27.469196 3308660 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:09:27.469279 3308660 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:09:27.469317 3308660 kubeadm.go:319] OS: Linux
	I0111 08:09:27.469366 3308660 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:09:27.469418 3308660 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:09:27.469468 3308660 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:09:27.469520 3308660 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:09:27.469584 3308660 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:09:27.469643 3308660 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:09:27.469691 3308660 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:09:27.469747 3308660 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:09:27.469803 3308660 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:09:27.543489 3308660 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:09:27.543616 3308660 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:09:27.543741 3308660 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:09:27.552855 3308660 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:09:27.555757 3308660 out.go:252]   - Generating certificates and keys ...
	I0111 08:09:27.555875 3308660 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:09:27.555956 3308660 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:09:27.556047 3308660 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 08:09:27.556120 3308660 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 08:09:27.556212 3308660 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 08:09:27.556337 3308660 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 08:09:27.556423 3308660 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 08:09:27.556495 3308660 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 08:09:27.556576 3308660 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 08:09:27.556655 3308660 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 08:09:27.556701 3308660 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 08:09:27.556763 3308660 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:09:27.661376 3308660 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:09:27.898327 3308660 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:09:28.378405 3308660 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:09:28.544514 3308660 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:09:29.132579 3308660 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:09:29.133127 3308660 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:09:29.136250 3308660 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:09:29.139282 3308660 out.go:252]   - Booting up control plane ...
	I0111 08:09:29.139385 3308660 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:09:29.139463 3308660 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:09:29.140694 3308660 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:09:29.162111 3308660 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:09:29.162225 3308660 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:09:29.170413 3308660 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:09:29.170761 3308660 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:09:29.170809 3308660 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:09:29.328774 3308660 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:09:29.328916 3308660 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:13:29.329137 3308660 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000911202s
	I0111 08:13:29.329164 3308660 kubeadm.go:319] 
	I0111 08:13:29.329222 3308660 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:13:29.329256 3308660 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:13:29.329360 3308660 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:13:29.329365 3308660 kubeadm.go:319] 
	I0111 08:13:29.329469 3308660 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:13:29.329502 3308660 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:13:29.329558 3308660 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:13:29.329564 3308660 kubeadm.go:319] 
	I0111 08:13:29.336690 3308660 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:13:29.337118 3308660 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:13:29.337230 3308660 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:13:29.337486 3308660 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:13:29.337495 3308660 kubeadm.go:319] 
	I0111 08:13:29.337570 3308660 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:13:29.337634 3308660 kubeadm.go:403] duration metric: took 8m6.811988233s to StartCluster
	I0111 08:13:29.337685 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0111 08:13:29.337753 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 08:13:29.362621 3308660 cri.go:96] found id: ""
	I0111 08:13:29.362671 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.362712 3308660 logs.go:284] No container was found matching "kube-apiserver"
	I0111 08:13:29.362721 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0111 08:13:29.362797 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 08:13:29.388483 3308660 cri.go:96] found id: ""
	I0111 08:13:29.388509 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.388518 3308660 logs.go:284] No container was found matching "etcd"
	I0111 08:13:29.388524 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0111 08:13:29.388583 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 08:13:29.414014 3308660 cri.go:96] found id: ""
	I0111 08:13:29.414039 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.414048 3308660 logs.go:284] No container was found matching "coredns"
	I0111 08:13:29.414054 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0111 08:13:29.414115 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 08:13:29.439048 3308660 cri.go:96] found id: ""
	I0111 08:13:29.439073 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.439081 3308660 logs.go:284] No container was found matching "kube-scheduler"
	I0111 08:13:29.439088 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0111 08:13:29.439147 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 08:13:29.466496 3308660 cri.go:96] found id: ""
	I0111 08:13:29.466520 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.466529 3308660 logs.go:284] No container was found matching "kube-proxy"
	I0111 08:13:29.466536 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 08:13:29.466613 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 08:13:29.491417 3308660 cri.go:96] found id: ""
	I0111 08:13:29.491443 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.491465 3308660 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 08:13:29.491472 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0111 08:13:29.491530 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 08:13:29.515542 3308660 cri.go:96] found id: ""
	I0111 08:13:29.515567 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.515576 3308660 logs.go:284] No container was found matching "kindnet"
	I0111 08:13:29.515585 3308660 logs.go:123] Gathering logs for kubelet ...
	I0111 08:13:29.515596 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 08:13:29.574593 3308660 logs.go:123] Gathering logs for dmesg ...
	I0111 08:13:29.574630 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 08:13:29.591579 3308660 logs.go:123] Gathering logs for describe nodes ...
	I0111 08:13:29.591607 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 08:13:29.666942 3308660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:13:29.658562    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.659168    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.660918    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.661530    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.663019    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 08:13:29.658562    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.659168    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.660918    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.661530    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.663019    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 08:13:29.667008 3308660 logs.go:123] Gathering logs for containerd ...
	I0111 08:13:29.667028 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0111 08:13:29.706334 3308660 logs.go:123] Gathering logs for container status ...
	I0111 08:13:29.706368 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0111 08:13:29.737417 3308660 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000911202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 08:13:29.737480 3308660 out.go:285] * 
	* 
	W0111 08:13:29.737571 3308660 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000911202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000911202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:13:29.737588 3308660 out.go:285] * 
	* 
	W0111 08:13:29.737895 3308660 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:13:29.744161 3308660 out.go:203] 
	W0111 08:13:29.747235 3308660 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000911202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000911202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:13:29.747307 3308660 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 08:13:29.747331 3308660 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 08:13:29.750573 3308660 out.go:203] 

                                                
                                                
** /stderr **
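The stderr above ends with minikube's own suggestion: check 'journalctl -xeu kubelet' and retry with the kubelet cgroup driver forced to systemd. A minimal manual retry along those lines, outside the test harness, could look like the following (sketch only; the profile name and flags are copied from this run's output, and this is not verified to resolve this particular failure):

	# Remove the half-initialized profile, then retry with the suggested kubelet cgroup driver.
	minikube delete -p force-systemd-env-305397
	minikube start -p force-systemd-env-305397 --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet still never becomes healthy on 127.0.0.1:10248, collect its logs from the
	# node, as the kubeadm output above recommends:
	minikube -p force-systemd-env-305397 ssh "sudo journalctl -xeu kubelet | tail -n 200"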
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-305397 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd" : exit status 109
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-305397 ssh "cat /etc/containerd/config.toml"
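Even though the start failed, the harness still reads the containerd config from the node. A hand-run equivalent of that check might be the following (assumption: the force-systemd tests look for 'SystemdCgroup = true' in this file; the exact TOML table path for the runc options differs between containerd 1.x and 2.x configs):

	# Verify that containerd's runc runtime was configured for the systemd cgroup driver.
	minikube -p force-systemd-env-305397 ssh "grep -n 'SystemdCgroup' /etc/containerd/config.toml"
	# Expected when systemd cgroups were forced:
	#   SystemdCgroup = true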
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2026-01-11 08:13:30.195540245 +0000 UTC m=+2868.229838817
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-305397
helpers_test.go:244: (dbg) docker inspect force-systemd-env-305397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e45f405a9e11778c24183bf1fd8c13226c9d7aa4826875fb4b0f5b7cd792d212",
	        "Created": "2026-01-11T08:05:13.783246845Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3309560,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:05:13.863830754Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/e45f405a9e11778c24183bf1fd8c13226c9d7aa4826875fb4b0f5b7cd792d212/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e45f405a9e11778c24183bf1fd8c13226c9d7aa4826875fb4b0f5b7cd792d212/hostname",
	        "HostsPath": "/var/lib/docker/containers/e45f405a9e11778c24183bf1fd8c13226c9d7aa4826875fb4b0f5b7cd792d212/hosts",
	        "LogPath": "/var/lib/docker/containers/e45f405a9e11778c24183bf1fd8c13226c9d7aa4826875fb4b0f5b7cd792d212/e45f405a9e11778c24183bf1fd8c13226c9d7aa4826875fb4b0f5b7cd792d212-json.log",
	        "Name": "/force-systemd-env-305397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-305397:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-305397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e45f405a9e11778c24183bf1fd8c13226c9d7aa4826875fb4b0f5b7cd792d212",
	                "LowerDir": "/var/lib/docker/overlay2/962c0b60e1cd75b170fcbf48fb614a01ed7a2db7b7b5a0200c052e1c2cf6d663-init/diff:/var/lib/docker/overlay2/df463cec8bfb6e167fe65d2de959d2835d839df5d29dad0284e7abf6afbac443/diff",
	                "MergedDir": "/var/lib/docker/overlay2/962c0b60e1cd75b170fcbf48fb614a01ed7a2db7b7b5a0200c052e1c2cf6d663/merged",
	                "UpperDir": "/var/lib/docker/overlay2/962c0b60e1cd75b170fcbf48fb614a01ed7a2db7b7b5a0200c052e1c2cf6d663/diff",
	                "WorkDir": "/var/lib/docker/overlay2/962c0b60e1cd75b170fcbf48fb614a01ed7a2db7b7b5a0200c052e1c2cf6d663/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-305397",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-305397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-305397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-305397",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-305397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0a9fd32dc74556fc9f21967fb3aca809dd50579419b91b5fc041d7b751641858",
	            "SandboxKey": "/var/run/docker/netns/0a9fd32dc745",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-305397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:a8:cd:d6:3b:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9455289443b510afb71c749696052dce431e614753a756dd940936fb407ff777",
	                    "EndpointID": "afa4516d5f2994fb2ae4bfcd798af90d23cd6a8d555ae66028653b9f8050d6ac",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-305397",
	                        "e45f405a9e11"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-305397 -n force-systemd-env-305397
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-305397 -n force-systemd-env-305397: exit status 6 (369.916517ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0111 08:13:30.569424 3333233 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-305397" does not appear in /home/jenkins/minikube-integration/22402-3122619/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
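The status output above also warns that kubectl is pointing at a stale context and that `minikube update-context` fixes it. A hedged sketch of that fix for this profile (the update-context command comes from the warning itself; targeting it at this profile with -p is an assumption):

	out/minikube-linux-arm64 update-context -p force-systemd-env-305397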
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-305397 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-017834 sudo cat /var/lib/kubelet/config.yaml                                                                            │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo systemctl status docker --all --full --no-pager                                                             │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo systemctl cat docker --no-pager                                                                             │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo cat /etc/docker/daemon.json                                                                                 │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo docker system info                                                                                          │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo systemctl status cri-docker --all --full --no-pager                                                         │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo systemctl cat cri-docker --no-pager                                                                         │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                    │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo cat /usr/lib/systemd/system/cri-docker.service                                                              │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo cri-dockerd --version                                                                                       │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo systemctl status containerd --all --full --no-pager                                                         │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo systemctl cat containerd --no-pager                                                                         │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo cat /lib/systemd/system/containerd.service                                                                  │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo cat /etc/containerd/config.toml                                                                             │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo containerd config dump                                                                                      │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo systemctl status crio --all --full --no-pager                                                               │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo systemctl cat crio --no-pager                                                                               │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                     │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ ssh     │ -p cilium-017834 sudo crio config                                                                                                 │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │                     │
	│ delete  │ -p cilium-017834                                                                                                                  │ cilium-017834             │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │ 11 Jan 26 08:08 UTC │
	│ start   │ -p cert-expiration-192657 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                      │ cert-expiration-192657    │ jenkins │ v1.37.0 │ 11 Jan 26 08:08 UTC │ 11 Jan 26 08:08 UTC │
	│ start   │ -p cert-expiration-192657 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                   │ cert-expiration-192657    │ jenkins │ v1.37.0 │ 11 Jan 26 08:11 UTC │ 11 Jan 26 08:11 UTC │
	│ delete  │ -p cert-expiration-192657                                                                                                         │ cert-expiration-192657    │ jenkins │ v1.37.0 │ 11 Jan 26 08:11 UTC │ 11 Jan 26 08:11 UTC │
	│ start   │ -p force-systemd-flag-610060 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-flag-610060 │ jenkins │ v1.37.0 │ 11 Jan 26 08:11 UTC │                     │
	│ ssh     │ force-systemd-env-305397 ssh cat /etc/containerd/config.toml                                                                      │ force-systemd-env-305397  │ jenkins │ v1.37.0 │ 11 Jan 26 08:13 UTC │ 11 Jan 26 08:13 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:11:45
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:11:45.966483 3329885 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:11:45.966703 3329885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:11:45.966732 3329885 out.go:374] Setting ErrFile to fd 2...
	I0111 08:11:45.966751 3329885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:11:45.967177 3329885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 08:11:45.968023 3329885 out.go:368] Setting JSON to false
	I0111 08:11:45.968908 3329885 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":50057,"bootTime":1768069049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 08:11:45.968983 3329885 start.go:143] virtualization:  
	I0111 08:11:45.972677 3329885 out.go:179] * [force-systemd-flag-610060] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:11:45.977345 3329885 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:11:45.977453 3329885 notify.go:221] Checking for updates...
	I0111 08:11:45.984099 3329885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:11:45.987358 3329885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 08:11:45.990611 3329885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 08:11:45.993730 3329885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:11:45.996854 3329885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:11:46.002916 3329885 config.go:182] Loaded profile config "force-systemd-env-305397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:11:46.003074 3329885 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:11:46.034142 3329885 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:11:46.034275 3329885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:11:46.125120 3329885 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:11:46.113366797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:11:46.125226 3329885 docker.go:319] overlay module found
	I0111 08:11:46.128633 3329885 out.go:179] * Using the docker driver based on user configuration
	I0111 08:11:46.131564 3329885 start.go:309] selected driver: docker
	I0111 08:11:46.131591 3329885 start.go:928] validating driver "docker" against <nil>
	I0111 08:11:46.131605 3329885 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:11:46.132458 3329885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:11:46.188583 3329885 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:11:46.179395708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:11:46.188739 3329885 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:11:46.188960 3329885 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:11:46.191962 3329885 out.go:179] * Using Docker driver with root privileges
	I0111 08:11:46.194890 3329885 cni.go:84] Creating CNI manager for ""
	I0111 08:11:46.194959 3329885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:11:46.194975 3329885 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 08:11:46.195053 3329885 start.go:353] cluster config:
	{Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I0111 08:11:46.200121 3329885 out.go:179] * Starting "force-systemd-flag-610060" primary control-plane node in "force-systemd-flag-610060" cluster
	I0111 08:11:46.203055 3329885 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0111 08:11:46.206054 3329885 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:11:46.208898 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:11:46.208958 3329885 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
	I0111 08:11:46.208971 3329885 cache.go:65] Caching tarball of preloaded images
	I0111 08:11:46.208985 3329885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:11:46.209059 3329885 preload.go:251] Found /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0111 08:11:46.209070 3329885 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on containerd
	I0111 08:11:46.209177 3329885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json ...
	I0111 08:11:46.209198 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json: {Name:mke00c980f6aa6c98163914c28e2b3a0179313f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:46.228792 3329885 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:11:46.228814 3329885 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:11:46.228829 3329885 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:11:46.228857 3329885 start.go:360] acquireMachinesLock for force-systemd-flag-610060: {Name:mk7b285d446b288e2ef1025bb5bf30ad660e990b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:11:46.228963 3329885 start.go:364] duration metric: took 84.946µs to acquireMachinesLock for "force-systemd-flag-610060"
	I0111 08:11:46.228995 3329885 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0111 08:11:46.229072 3329885 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:11:46.232524 3329885 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:11:46.232749 3329885 start.go:159] libmachine.API.Create for "force-systemd-flag-610060" (driver="docker")
	I0111 08:11:46.232785 3329885 client.go:173] LocalClient.Create starting
	I0111 08:11:46.232857 3329885 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem
	I0111 08:11:46.232894 3329885 main.go:144] libmachine: Decoding PEM data...
	I0111 08:11:46.232913 3329885 main.go:144] libmachine: Parsing certificate...
	I0111 08:11:46.232970 3329885 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem
	I0111 08:11:46.232992 3329885 main.go:144] libmachine: Decoding PEM data...
	I0111 08:11:46.233007 3329885 main.go:144] libmachine: Parsing certificate...
	I0111 08:11:46.233367 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:11:46.250050 3329885 cli_runner.go:211] docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:11:46.250150 3329885 network_create.go:284] running [docker network inspect force-systemd-flag-610060] to gather additional debugging logs...
	I0111 08:11:46.250170 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060
	W0111 08:11:46.264851 3329885 cli_runner.go:211] docker network inspect force-systemd-flag-610060 returned with exit code 1
	I0111 08:11:46.264883 3329885 network_create.go:287] error running [docker network inspect force-systemd-flag-610060]: docker network inspect force-systemd-flag-610060: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-610060 not found
	I0111 08:11:46.264896 3329885 network_create.go:289] output of [docker network inspect force-systemd-flag-610060]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-610060 not found
	
	** /stderr **
	I0111 08:11:46.265009 3329885 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:11:46.281585 3329885 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d6a2604bb10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:cd:63:f9:b2:f8} reservation:<nil>}
	I0111 08:11:46.281997 3329885 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cec031213447 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:71:bf:56:ac:cb} reservation:<nil>}
	I0111 08:11:46.282212 3329885 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0e2d137ca1da IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:68:81:9e:35:63} reservation:<nil>}
	I0111 08:11:46.282485 3329885 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9455289443b5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:d1:66:6a:84:dd} reservation:<nil>}
	I0111 08:11:46.282935 3329885 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a129c0}
	I0111 08:11:46.282958 3329885 network_create.go:124] attempt to create docker network force-systemd-flag-610060 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 08:11:46.283014 3329885 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-610060 force-systemd-flag-610060
	I0111 08:11:46.338524 3329885 network_create.go:108] docker network force-systemd-flag-610060 192.168.85.0/24 created
	I0111 08:11:46.338555 3329885 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-610060" container
	I0111 08:11:46.338639 3329885 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:11:46.354768 3329885 cli_runner.go:164] Run: docker volume create force-systemd-flag-610060 --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:11:46.372694 3329885 oci.go:103] Successfully created a docker volume force-systemd-flag-610060
	I0111 08:11:46.372798 3329885 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-610060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --entrypoint /usr/bin/test -v force-systemd-flag-610060:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:11:46.921885 3329885 oci.go:107] Successfully prepared a docker volume force-systemd-flag-610060
	I0111 08:11:46.921940 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:11:46.921951 3329885 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:11:46.922032 3329885 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-610060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:11:50.731187 3329885 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-610060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.809106983s)
	I0111 08:11:50.731222 3329885 kic.go:203] duration metric: took 3.80926748s to extract preloaded images to volume ...
	W0111 08:11:50.731361 3329885 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:11:50.731477 3329885 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:11:50.797692 3329885 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-610060 --name force-systemd-flag-610060 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-610060 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-610060 --network force-systemd-flag-610060 --ip 192.168.85.2 --volume force-systemd-flag-610060:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:11:51.110888 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Running}}
	I0111 08:11:51.136837 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
	I0111 08:11:51.165956 3329885 cli_runner.go:164] Run: docker exec force-systemd-flag-610060 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:11:51.215991 3329885 oci.go:144] the created container "force-systemd-flag-610060" has a running status.
	I0111 08:11:51.216037 3329885 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa...
	I0111 08:11:51.516534 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:11:51.516633 3329885 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:11:51.539007 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
	I0111 08:11:51.567105 3329885 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:11:51.567123 3329885 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-610060 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:11:51.645455 3329885 cli_runner.go:164] Run: docker container inspect force-systemd-flag-610060 --format={{.State.Status}}
	I0111 08:11:51.680580 3329885 machine.go:94] provisionDockerMachine start ...
	I0111 08:11:51.680675 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:51.710716 3329885 main.go:144] libmachine: Using SSH client type: native
	I0111 08:11:51.711064 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0111 08:11:51.711073 3329885 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:11:51.711854 3329885 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 08:11:54.859728 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-610060
	
	I0111 08:11:54.859755 3329885 ubuntu.go:182] provisioning hostname "force-systemd-flag-610060"
	I0111 08:11:54.859827 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:54.876832 3329885 main.go:144] libmachine: Using SSH client type: native
	I0111 08:11:54.877152 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0111 08:11:54.877172 3329885 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-610060 && echo "force-systemd-flag-610060" | sudo tee /etc/hostname
	I0111 08:11:55.043732 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-610060
	
	I0111 08:11:55.043827 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.066688 3329885 main.go:144] libmachine: Using SSH client type: native
	I0111 08:11:55.067032 3329885 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0111 08:11:55.067054 3329885 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-610060' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-610060/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-610060' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:11:55.224621 3329885 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:11:55.224644 3329885 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-3122619/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-3122619/.minikube}
	I0111 08:11:55.224663 3329885 ubuntu.go:190] setting up certificates
	I0111 08:11:55.224672 3329885 provision.go:84] configureAuth start
	I0111 08:11:55.224733 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
	I0111 08:11:55.242257 3329885 provision.go:143] copyHostCerts
	I0111 08:11:55.242309 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
	I0111 08:11:55.242342 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem, removing ...
	I0111 08:11:55.242359 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem
	I0111 08:11:55.242440 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.pem (1078 bytes)
	I0111 08:11:55.242520 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
	I0111 08:11:55.242542 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem, removing ...
	I0111 08:11:55.242556 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem
	I0111 08:11:55.242586 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/cert.pem (1123 bytes)
	I0111 08:11:55.242658 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
	I0111 08:11:55.242679 3329885 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem, removing ...
	I0111 08:11:55.242686 3329885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem
	I0111 08:11:55.242713 3329885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-3122619/.minikube/key.pem (1675 bytes)
	I0111 08:11:55.242763 3329885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-610060 san=[127.0.0.1 192.168.85.2 force-systemd-flag-610060 localhost minikube]
	I0111 08:11:55.423643 3329885 provision.go:177] copyRemoteCerts
	I0111 08:11:55.423714 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:11:55.423760 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.442089 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:55.544114 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:11:55.544174 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0111 08:11:55.562451 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:11:55.562560 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:11:55.579624 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:11:55.579720 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 08:11:55.597215 3329885 provision.go:87] duration metric: took 372.519842ms to configureAuth
	I0111 08:11:55.597285 3329885 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:11:55.597493 3329885 config.go:182] Loaded profile config "force-systemd-flag-610060": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:11:55.597509 3329885 machine.go:97] duration metric: took 3.916909939s to provisionDockerMachine
	I0111 08:11:55.597517 3329885 client.go:176] duration metric: took 9.364722727s to LocalClient.Create
	I0111 08:11:55.597537 3329885 start.go:167] duration metric: took 9.364789212s to libmachine.API.Create "force-systemd-flag-610060"
	I0111 08:11:55.597550 3329885 start.go:293] postStartSetup for "force-systemd-flag-610060" (driver="docker")
	I0111 08:11:55.597559 3329885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:11:55.597617 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:11:55.597673 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.614880 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:55.720221 3329885 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:11:55.723472 3329885 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:11:55.723501 3329885 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:11:55.723512 3329885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/addons for local assets ...
	I0111 08:11:55.723589 3329885 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-3122619/.minikube/files for local assets ...
	I0111 08:11:55.723683 3329885 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> 31244842.pem in /etc/ssl/certs
	I0111 08:11:55.723702 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> /etc/ssl/certs/31244842.pem
	I0111 08:11:55.723821 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:11:55.731084 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /etc/ssl/certs/31244842.pem (1708 bytes)
	I0111 08:11:55.748115 3329885 start.go:296] duration metric: took 150.541395ms for postStartSetup
	I0111 08:11:55.748506 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
	I0111 08:11:55.765507 3329885 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/config.json ...
	I0111 08:11:55.765856 3329885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:11:55.765912 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.782246 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:55.885136 3329885 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:11:55.889854 3329885 start.go:128] duration metric: took 9.66076858s to createHost
	I0111 08:11:55.889877 3329885 start.go:83] releasing machines lock for "force-systemd-flag-610060", held for 9.660899777s
	I0111 08:11:55.889946 3329885 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-610060
	I0111 08:11:55.909521 3329885 ssh_runner.go:195] Run: cat /version.json
	I0111 08:11:55.909572 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.909672 3329885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:11:55.909730 3329885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-610060
	I0111 08:11:55.929746 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:55.940401 3329885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/force-systemd-flag-610060/id_rsa Username:docker}
	I0111 08:11:56.137299 3329885 ssh_runner.go:195] Run: systemctl --version
	I0111 08:11:56.144072 3329885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:11:56.149649 3329885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:11:56.149741 3329885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:11:56.178398 3329885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:11:56.178426 3329885 start.go:496] detecting cgroup driver to use...
	I0111 08:11:56.178440 3329885 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:11:56.178497 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0111 08:11:56.194017 3329885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0111 08:11:56.207355 3329885 docker.go:218] disabling cri-docker service (if available) ...
	I0111 08:11:56.207437 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0111 08:11:56.225243 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0111 08:11:56.244325 3329885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0111 08:11:56.364184 3329885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0111 08:11:56.477116 3329885 docker.go:234] disabling docker service ...
	I0111 08:11:56.477205 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0111 08:11:56.497704 3329885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0111 08:11:56.510638 3329885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0111 08:11:56.657297 3329885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0111 08:11:56.780195 3329885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
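The eight systemctl calls above amount to one stop/disable/mask pass over the Docker-related units. A compact equivalent sketch, using only the unit names that appear in the commands above:

	# stop, disable and mask docker/cri-docker so containerd is the only CRI in play
	for u in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$u"
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker || echo "docker is not active"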
	I0111 08:11:56.793449 3329885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:11:56.808590 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0111 08:11:56.818025 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0111 08:11:56.826953 3329885 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0111 08:11:56.827070 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0111 08:11:56.836326 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:11:56.845203 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0111 08:11:56.854138 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:11:56.862604 3329885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:11:56.870988 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0111 08:11:56.879524 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0111 08:11:56.888444 3329885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0111 08:11:56.897577 3329885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:11:56.905221 3329885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:11:56.912587 3329885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:57.029083 3329885 ssh_runner.go:195] Run: sudo systemctl restart containerd
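A minimal node-side sketch of how the SystemdCgroup edit above could be verified once containerd is back up; the config path comes from the commands above, while the crictl output field name is an assumption:

	grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = true
	sudo systemctl is-active containerd                    # expect: active
	sudo crictl info 2>/dev/null | grep -i systemdcgroup   # runtime view of the same setting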
	I0111 08:11:57.166782 3329885 start.go:553] Will wait 60s for socket path /run/containerd/containerd.sock
	I0111 08:11:57.166926 3329885 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0111 08:11:57.170932 3329885 start.go:574] Will wait 60s for crictl version
	I0111 08:11:57.171048 3329885 ssh_runner.go:195] Run: which crictl
	I0111 08:11:57.174867 3329885 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:11:57.199898 3329885 start.go:590] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.1
	RuntimeApiVersion:  v1
	I0111 08:11:57.199981 3329885 ssh_runner.go:195] Run: containerd --version
	I0111 08:11:57.219306 3329885 ssh_runner.go:195] Run: containerd --version
	I0111 08:11:57.244995 3329885 out.go:179] * Preparing Kubernetes v1.35.0 on containerd 2.2.1 ...
	I0111 08:11:57.248122 3329885 cli_runner.go:164] Run: docker network inspect force-systemd-flag-610060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:11:57.264038 3329885 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 08:11:57.267824 3329885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:11:57.277801 3329885 kubeadm.go:884] updating cluster {Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:11:57.278152 3329885 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
	I0111 08:11:57.278240 3329885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:11:57.315254 3329885 containerd.go:635] all images are preloaded for containerd runtime.
	I0111 08:11:57.315275 3329885 containerd.go:542] Images already preloaded, skipping extraction
	I0111 08:11:57.315336 3329885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0111 08:11:57.349393 3329885 containerd.go:635] all images are preloaded for containerd runtime.
	I0111 08:11:57.349415 3329885 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:11:57.349423 3329885 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 containerd true true} ...
	I0111 08:11:57.349517 3329885 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-610060 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
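The kubelet unit and ExecStart override shown above land on the node as a systemd drop-in a few steps later (10-kubeadm.conf). A short sketch for inspecting the effective unit, assuming a shell inside the node container:

	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the generated override
	systemctl cat kubelet                                        # unit plus all drop-ins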
	I0111 08:11:57.349582 3329885 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I0111 08:11:57.382639 3329885 cni.go:84] Creating CNI manager for ""
	I0111 08:11:57.382663 3329885 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 08:11:57.382685 3329885 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:11:57.382708 3329885 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-610060 NodeName:force-systemd-flag-610060 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:11:57.382828 3329885 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "force-systemd-flag-610060"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:11:57.382905 3329885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:11:57.390559 3329885 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:11:57.390630 3329885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:11:57.398214 3329885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0111 08:11:57.410850 3329885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:11:57.424327 3329885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
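The generated KubeletConfiguration above pins cgroupDriver: systemd, which has to line up with the SystemdCgroup = true edit made to containerd earlier. A node-side cross-check could look like this; the stat idiom for detecting the cgroup version is a common trick, not something the test itself runs:

	grep cgroupDriver /var/lib/kubelet/config.yaml   # written by kubeadm during init
	stat -fc %T /sys/fs/cgroup/                      # cgroup2fs => cgroup v2, tmpfs => cgroup v1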
	I0111 08:11:57.436984 3329885 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:11:57.440400 3329885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:11:57.450402 3329885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:57.573600 3329885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:11:57.590952 3329885 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060 for IP: 192.168.85.2
	I0111 08:11:57.590987 3329885 certs.go:195] generating shared ca certs ...
	I0111 08:11:57.591004 3329885 certs.go:227] acquiring lock for ca certs: {Name:mk4f88e5992499f3a8089baf463e3ba7f81a52c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:57.591198 3329885 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key
	I0111 08:11:57.591246 3329885 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key
	I0111 08:11:57.591260 3329885 certs.go:257] generating profile certs ...
	I0111 08:11:57.591327 3329885 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key
	I0111 08:11:57.591359 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt with IP's: []
	I0111 08:11:58.180659 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt ...
	I0111 08:11:58.180706 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.crt: {Name:mk9bd0b635b7181a879895561a6d686f28614647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.180963 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key ...
	I0111 08:11:58.180982 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/client.key: {Name:mkfe2120f2e6288c7ad6ca3b08d9dccc6b76b069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.181090 3329885 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120
	I0111 08:11:58.181117 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 08:11:58.711099 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 ...
	I0111 08:11:58.711132 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120: {Name:mke960834fa45cb1bccf7b579ab4a287f777445c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.711369 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120 ...
	I0111 08:11:58.711385 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120: {Name:mkf73783f074957828edc09fa9ea5a4548656c1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.711473 3329885 certs.go:382] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt.bb1d5120 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt
	I0111 08:11:58.711554 3329885 certs.go:386] copying /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key.bb1d5120 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key
	I0111 08:11:58.711646 3329885 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key
	I0111 08:11:58.711665 3329885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt with IP's: []
	I0111 08:11:58.912664 3329885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt ...
	I0111 08:11:58.912696 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt: {Name:mk922fc5010cb627196768e155857c21dcb7d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.912882 3329885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key ...
	I0111 08:11:58.912895 3329885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key: {Name:mk6bffbe07eace11218581bafe3df67bbad9745d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:58.912983 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:11:58.913003 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:11:58.913015 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:11:58.913030 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:11:58.913042 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:11:58.913059 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:11:58.913074 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:11:58.913089 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:11:58.913153 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem (1338 bytes)
	W0111 08:11:58.913196 3329885 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484_empty.pem, impossibly tiny 0 bytes
	I0111 08:11:58.913209 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca-key.pem (1679 bytes)
	I0111 08:11:58.913237 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/ca.pem (1078 bytes)
	I0111 08:11:58.913264 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:11:58.913300 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/key.pem (1675 bytes)
	I0111 08:11:58.913351 3329885 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem (1708 bytes)
	I0111 08:11:58.913383 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:58.913398 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem -> /usr/share/ca-certificates/3124484.pem
	I0111 08:11:58.913409 3329885 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem -> /usr/share/ca-certificates/31244842.pem
	I0111 08:11:58.913910 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:11:58.934507 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 08:11:58.955410 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:11:58.973948 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:11:58.992574 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:11:59.013013 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:11:59.031246 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:11:59.051982 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/force-systemd-flag-610060/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:11:59.070932 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:11:59.088240 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/certs/3124484.pem --> /usr/share/ca-certificates/3124484.pem (1338 bytes)
	I0111 08:11:59.106023 3329885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/ssl/certs/31244842.pem --> /usr/share/ca-certificates/31244842.pem (1708 bytes)
	I0111 08:11:59.124702 3329885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:11:59.137901 3329885 ssh_runner.go:195] Run: openssl version
	I0111 08:11:59.144226 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/31244842.pem
	I0111 08:11:59.152606 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/31244842.pem /etc/ssl/certs/31244842.pem
	I0111 08:11:59.160337 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31244842.pem
	I0111 08:11:59.164263 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:32 /usr/share/ca-certificates/31244842.pem
	I0111 08:11:59.164416 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31244842.pem
	I0111 08:11:59.206887 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:11:59.214713 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/31244842.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:11:59.222480 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:59.230140 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:11:59.238568 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:59.242380 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:26 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:59.242451 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:59.283430 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:11:59.291242 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:11:59.299017 3329885 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3124484.pem
	I0111 08:11:59.306462 3329885 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3124484.pem /etc/ssl/certs/3124484.pem
	I0111 08:11:59.314159 3329885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3124484.pem
	I0111 08:11:59.318199 3329885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:32 /usr/share/ca-certificates/3124484.pem
	I0111 08:11:59.318267 3329885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3124484.pem
	I0111 08:11:59.364365 3329885 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:11:59.372018 3329885 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3124484.pem /etc/ssl/certs/51391683.0
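The three certificate blocks above repeat one pattern per CA file: place it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under that hash. As a sketch, for the minikubeCA file:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # e.g. b5213941.0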
	I0111 08:11:59.379565 3329885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:11:59.383252 3329885 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:11:59.383305 3329885 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-610060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-610060 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:11:59.383396 3329885 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0111 08:11:59.383462 3329885 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0111 08:11:59.409520 3329885 cri.go:96] found id: ""
	I0111 08:11:59.409625 3329885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:11:59.417554 3329885 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:11:59.425266 3329885 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:11:59.425333 3329885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:11:59.433014 3329885 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:11:59.433033 3329885 kubeadm.go:158] found existing configuration files:
	
	I0111 08:11:59.433106 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:11:59.441062 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:11:59.441144 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:11:59.448415 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:11:59.456088 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:11:59.456158 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:11:59.463696 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:11:59.471473 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:11:59.471550 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:11:59.479035 3329885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:11:59.486818 3329885 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:11:59.486907 3329885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
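The four grep/rm pairs above apply one cleanup rule per kubeconfig: if the file does not point at https://control-plane.minikube.internal:8443, remove it so kubeadm regenerates it. An equivalent sketch:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done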
	I0111 08:11:59.494369 3329885 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:11:59.531469 3329885 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:11:59.531535 3329885 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:11:59.616591 3329885 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:11:59.616667 3329885 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:11:59.616707 3329885 kubeadm.go:319] OS: Linux
	I0111 08:11:59.616757 3329885 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:11:59.616809 3329885 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:11:59.616860 3329885 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:11:59.616913 3329885 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:11:59.616966 3329885 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:11:59.617026 3329885 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:11:59.617076 3329885 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:11:59.617128 3329885 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:11:59.617177 3329885 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:11:59.680028 3329885 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:11:59.680143 3329885 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:11:59.680238 3329885 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:11:59.688820 3329885 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:11:59.695891 3329885 out.go:252]   - Generating certificates and keys ...
	I0111 08:11:59.696068 3329885 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:11:59.696180 3329885 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:11:59.888200 3329885 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:12:00.676065 3329885 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:12:00.930267 3329885 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:12:01.030505 3329885 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:12:01.283889 3329885 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:12:01.284218 3329885 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:12:01.834107 3329885 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:12:01.834425 3329885 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-610060 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:12:01.879677 3329885 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:12:02.051499 3329885 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:12:02.379706 3329885 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:12:02.379938 3329885 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:12:02.595602 3329885 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:12:03.030736 3329885 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:12:03.387448 3329885 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:12:03.538058 3329885 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:12:04.600361 3329885 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:12:04.601328 3329885 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:12:04.604433 3329885 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:12:04.608234 3329885 out.go:252]   - Booting up control plane ...
	I0111 08:12:04.608345 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:12:04.608424 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:12:04.609166 3329885 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:12:04.626788 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:12:04.626897 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:12:04.634879 3329885 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:12:04.635665 3329885 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:12:04.635945 3329885 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:12:04.773900 3329885 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:12:04.774028 3329885 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:13:29.329137 3308660 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000911202s
	I0111 08:13:29.329164 3308660 kubeadm.go:319] 
	I0111 08:13:29.329222 3308660 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:13:29.329256 3308660 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:13:29.329360 3308660 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:13:29.329365 3308660 kubeadm.go:319] 
	I0111 08:13:29.329469 3308660 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:13:29.329502 3308660 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:13:29.329558 3308660 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:13:29.329564 3308660 kubeadm.go:319] 
	I0111 08:13:29.336690 3308660 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:13:29.337118 3308660 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:13:29.337230 3308660 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:13:29.337486 3308660 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:13:29.337495 3308660 kubeadm.go:319] 
	I0111 08:13:29.337570 3308660 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
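When the kubelet health check times out like this, the node-side follow-up normally starts with the commands kubeadm itself suggests, plus the health endpoint it was polling (sketch; the -n value is arbitrary):

	systemctl status kubelet --no-pager
	journalctl -xeu kubelet -n 100 --no-pager
	curl -sS http://127.0.0.1:10248/healthz; echo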
	I0111 08:13:29.337634 3308660 kubeadm.go:403] duration metric: took 8m6.811988233s to StartCluster
	I0111 08:13:29.337685 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0111 08:13:29.337753 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 08:13:29.362621 3308660 cri.go:96] found id: ""
	I0111 08:13:29.362671 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.362712 3308660 logs.go:284] No container was found matching "kube-apiserver"
	I0111 08:13:29.362721 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0111 08:13:29.362797 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 08:13:29.388483 3308660 cri.go:96] found id: ""
	I0111 08:13:29.388509 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.388518 3308660 logs.go:284] No container was found matching "etcd"
	I0111 08:13:29.388524 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0111 08:13:29.388583 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 08:13:29.414014 3308660 cri.go:96] found id: ""
	I0111 08:13:29.414039 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.414048 3308660 logs.go:284] No container was found matching "coredns"
	I0111 08:13:29.414054 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0111 08:13:29.414115 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 08:13:29.439048 3308660 cri.go:96] found id: ""
	I0111 08:13:29.439073 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.439081 3308660 logs.go:284] No container was found matching "kube-scheduler"
	I0111 08:13:29.439088 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0111 08:13:29.439147 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 08:13:29.466496 3308660 cri.go:96] found id: ""
	I0111 08:13:29.466520 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.466529 3308660 logs.go:284] No container was found matching "kube-proxy"
	I0111 08:13:29.466536 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 08:13:29.466613 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 08:13:29.491417 3308660 cri.go:96] found id: ""
	I0111 08:13:29.491443 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.491465 3308660 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 08:13:29.491472 3308660 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0111 08:13:29.491530 3308660 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 08:13:29.515542 3308660 cri.go:96] found id: ""
	I0111 08:13:29.515567 3308660 logs.go:282] 0 containers: []
	W0111 08:13:29.515576 3308660 logs.go:284] No container was found matching "kindnet"
	I0111 08:13:29.515585 3308660 logs.go:123] Gathering logs for kubelet ...
	I0111 08:13:29.515596 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 08:13:29.574593 3308660 logs.go:123] Gathering logs for dmesg ...
	I0111 08:13:29.574630 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 08:13:29.591579 3308660 logs.go:123] Gathering logs for describe nodes ...
	I0111 08:13:29.591607 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 08:13:29.666942 3308660 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:13:29.658562    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.659168    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.660918    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.661530    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.663019    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 08:13:29.658562    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.659168    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.660918    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.661530    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:29.663019    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 08:13:29.667008 3308660 logs.go:123] Gathering logs for containerd ...
	I0111 08:13:29.667028 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0111 08:13:29.706334 3308660 logs.go:123] Gathering logs for container status ...
	I0111 08:13:29.706368 3308660 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0111 08:13:29.737417 3308660 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000911202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 08:13:29.737480 3308660 out.go:285] * 
	W0111 08:13:29.737571 3308660 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000911202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:13:29.737588 3308660 out.go:285] * 
	W0111 08:13:29.737895 3308660 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:13:29.744161 3308660 out.go:203] 
	W0111 08:13:29.747235 3308660 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000911202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:13:29.747307 3308660 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 08:13:29.747331 3308660 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 08:13:29.750573 3308660 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434093945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434265961Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434414610Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434494362Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434554373Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434614277Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434671079Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434739427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434828721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.434974670Z" level=info msg="Connect containerd service"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.435383013Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.436067329Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.467776186Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.468028111Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.467956925Z" level=info msg="Start subscribing containerd event"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.468347668Z" level=info msg="Start recovering state"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.542042172Z" level=info msg="Start event monitor"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.542086618Z" level=info msg="Start cni network conf syncer for default"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.542095578Z" level=info msg="Start streaming server"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.542105407Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.542145168Z" level=info msg="runtime interface starting up..."
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.542155047Z" level=info msg="starting plugins..."
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.542170119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 11 08:05:20 force-systemd-env-305397 containerd[762]: time="2026-01-11T08:05:20.542319383Z" level=info msg="containerd successfully booted in 0.191210s"
	Jan 11 08:05:20 force-systemd-env-305397 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:13:31.204673    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:31.205148    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:31.206738    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:31.207196    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:13:31.208725    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan11 07:19] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jan11 07:25] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 08:13:31 up 13:56,  0 user,  load average: 0.47, 1.29, 1.99
	Linux force-systemd-env-305397 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 11 08:13:27 force-systemd-env-305397 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:13:28 force-systemd-env-305397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Jan 11 08:13:28 force-systemd-env-305397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:13:28 force-systemd-env-305397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:13:28 force-systemd-env-305397 kubelet[4762]: E0111 08:13:28.341759    4762 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:13:28 force-systemd-env-305397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:13:28 force-systemd-env-305397 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:13:29 force-systemd-env-305397 kubelet[4767]: E0111 08:13:29.085520    4767 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:13:29 force-systemd-env-305397 kubelet[4852]: E0111 08:13:29.860197    4852 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:13:29 force-systemd-env-305397 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:13:30 force-systemd-env-305397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 11 08:13:30 force-systemd-env-305397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:13:30 force-systemd-env-305397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:13:30 force-systemd-env-305397 kubelet[4878]: E0111 08:13:30.540492    4878 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:13:30 force-systemd-env-305397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:13:30 force-systemd-env-305397 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-305397 -n force-systemd-env-305397
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-305397 -n force-systemd-env-305397: exit status 6 (355.021046ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:13:31.696589 3333463 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-305397" does not appear in /home/jenkins/minikube-integration/22402-3122619/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-305397" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-305397" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-305397
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-305397: (1.979419453s)
--- FAIL: TestForceSystemdEnv (506.41s)
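
Both force-systemd failures above share the same root cause, visible in the kubelet journal: kubelet v1.35 refuses to start on a host that is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm SystemVerification warning notes that cgroup v1 support must now be opted into by setting the kubelet configuration option FailCgroupV1 to false. The minikube advice printed above suggests retrying with the systemd cgroup driver; outside the test harness that retry would look roughly like the sketch below (the profile name is illustrative, and whether this flag alone is enough on a cgroup v1 host is not established by this run):

	# Re-run the failing scenario with the kubelet cgroup driver pinned to systemd,
	# as suggested by the K8S_KUBELET_NOT_RUNNING advice above (profile name is illustrative).
	out/minikube-linux-arm64 start -p force-systemd-repro \
	  --force-systemd --driver=docker --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd

If that still trips the cgroup v1 validation, the kubeadm warning points at either migrating the host to cgroup v2 or explicitly setting FailCgroupV1 to false in the kubelet configuration.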

                                                
                                    

Test pass (305/337)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.17
9 TestDownloadOnly/v1.28.0/DeleteAll 0.36
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.25
12 TestDownloadOnly/v1.35.0/json-events 3.98
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.08
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 121.22
29 TestAddons/serial/Volcano 39.66
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.85
35 TestAddons/parallel/Registry 15.33
36 TestAddons/parallel/RegistryCreds 0.81
37 TestAddons/parallel/Ingress 18.75
38 TestAddons/parallel/InspektorGadget 11.79
39 TestAddons/parallel/MetricsServer 5.94
41 TestAddons/parallel/CSI 50.87
42 TestAddons/parallel/Headlamp 17.88
43 TestAddons/parallel/CloudSpanner 5.61
44 TestAddons/parallel/LocalPath 53.13
45 TestAddons/parallel/NvidiaDevicePlugin 5.9
46 TestAddons/parallel/Yakd 11.98
48 TestAddons/StoppedEnableDisable 12.6
49 TestCertOptions 30.55
50 TestCertExpiration 215.44
54 TestDockerEnvContainerd 43.32
58 TestErrorSpam/setup 26.32
59 TestErrorSpam/start 0.82
60 TestErrorSpam/status 1.19
61 TestErrorSpam/pause 1.86
62 TestErrorSpam/unpause 1.74
63 TestErrorSpam/stop 1.64
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 43.02
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.16
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.13
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.26
75 TestFunctional/serial/CacheCmd/cache/add_local 1.24
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 42.75
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.5
86 TestFunctional/serial/LogsFileCmd 1.55
87 TestFunctional/serial/InvalidService 4.47
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 9.58
91 TestFunctional/parallel/DryRun 0.55
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.14
97 TestFunctional/parallel/ServiceCmdConnect 8.6
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 20.72
101 TestFunctional/parallel/SSHCmd 0.75
102 TestFunctional/parallel/CpCmd 2.27
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.32
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
113 TestFunctional/parallel/License 0.34
114 TestFunctional/parallel/Version/short 0.09
115 TestFunctional/parallel/Version/components 1.36
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.28
121 TestFunctional/parallel/ImageCommands/Setup 0.64
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.56
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.37
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
131 TestFunctional/parallel/ProfileCmd/profile_list 0.53
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.87
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.4
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
147 TestFunctional/parallel/MountCmd/any-port 8.65
148 TestFunctional/parallel/ServiceCmd/List 0.52
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
151 TestFunctional/parallel/ServiceCmd/Format 0.4
152 TestFunctional/parallel/ServiceCmd/URL 0.41
153 TestFunctional/parallel/MountCmd/specific-port 2.24
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 184.68
163 TestMultiControlPlane/serial/DeployApp 7.35
164 TestMultiControlPlane/serial/PingHostFromPods 1.55
165 TestMultiControlPlane/serial/AddWorkerNode 31.11
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
168 TestMultiControlPlane/serial/CopyFile 20.44
169 TestMultiControlPlane/serial/StopSecondaryNode 13
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.16
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 100.26
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.3
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
176 TestMultiControlPlane/serial/StopCluster 36.29
177 TestMultiControlPlane/serial/RestartCluster 60.48
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.84
179 TestMultiControlPlane/serial/AddSecondaryNode 57.71
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 48.5
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.71
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.66
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.07
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 34.92
211 TestKicCustomNetwork/use_default_bridge_network 28.51
212 TestKicExistingNetwork 30.12
213 TestKicCustomSubnet 29.75
214 TestKicStaticIP 32.57
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 61.66
219 TestMountStart/serial/StartWithMountFirst 8.54
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 8.64
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.32
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.7
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 73.33
231 TestMultiNode/serial/DeployApp2Nodes 5.69
232 TestMultiNode/serial/PingHostFrom2Pods 1.06
233 TestMultiNode/serial/AddNode 28.96
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.49
237 TestMultiNode/serial/StopNode 2.46
238 TestMultiNode/serial/StartAfterStop 8.26
239 TestMultiNode/serial/RestartKeepsNodes 77.21
240 TestMultiNode/serial/DeleteNode 5.77
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 53.26
243 TestMultiNode/serial/ValidateNameConflict 31.31
250 TestScheduledStopUnix 102.31
253 TestInsufficientStorage 12.5
254 TestRunningBinaryUpgrade 328.98
256 TestKubernetesUpgrade 85.13
257 TestMissingContainerUpgrade 152.19
259 TestPause/serial/Start 54.44
260 TestPause/serial/SecondStartNoReconfiguration 8.51
261 TestPause/serial/Pause 0.9
262 TestPause/serial/VerifyStatus 0.43
263 TestPause/serial/Unpause 0.89
264 TestPause/serial/PauseAgain 1.3
265 TestPause/serial/DeletePaused 3.28
266 TestPause/serial/VerifyDeletedResources 0.2
267 TestStoppedBinaryUpgrade/Setup 0.91
268 TestStoppedBinaryUpgrade/Upgrade 313.98
269 TestStoppedBinaryUpgrade/MinikubeLogs 5.23
277 TestPreload/Start-NoPreload-PullImage 65.51
278 TestPreload/Restart-With-Preload-Check-User-Image 50.81
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
282 TestNoKubernetes/serial/StartWithK8s 27.6
283 TestNoKubernetes/serial/StartWithStopK8s 16.22
284 TestNoKubernetes/serial/Start 7.33
285 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
287 TestNoKubernetes/serial/ProfileList 1.06
288 TestNoKubernetes/serial/Stop 1.32
289 TestNoKubernetes/serial/StartNoArgs 6.64
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
298 TestNetworkPlugins/group/false 3.66
303 TestStartStop/group/old-k8s-version/serial/FirstStart 60.47
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
306 TestStartStop/group/old-k8s-version/serial/Stop 12.11
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/old-k8s-version/serial/SecondStart 51.72
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
312 TestStartStop/group/old-k8s-version/serial/Pause 3.07
314 TestStartStop/group/no-preload/serial/FirstStart 51.09
315 TestStartStop/group/no-preload/serial/DeployApp 9.34
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
317 TestStartStop/group/no-preload/serial/Stop 12.08
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
319 TestStartStop/group/no-preload/serial/SecondStart 53.47
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
323 TestStartStop/group/no-preload/serial/Pause 3.17
325 TestStartStop/group/embed-certs/serial/FirstStart 46.87
326 TestStartStop/group/embed-certs/serial/DeployApp 9.34
327 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
328 TestStartStop/group/embed-certs/serial/Stop 12.42
330 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.58
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
332 TestStartStop/group/embed-certs/serial/SecondStart 51.2
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.19
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
339 TestStartStop/group/embed-certs/serial/Pause 3.02
341 TestStartStop/group/newest-cni/serial/FirstStart 35.88
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.59
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
346 TestStartStop/group/newest-cni/serial/Stop 1.36
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
348 TestStartStop/group/newest-cni/serial/SecondStart 15.35
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
352 TestStartStop/group/newest-cni/serial/Pause 3.26
353 TestPreload/PreloadSrc/gcs 4.67
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
355 TestPreload/PreloadSrc/github 4.13
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
357 TestPreload/PreloadSrc/gcs-cached 0.53
358 TestNetworkPlugins/group/auto/Start 53.28
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.4
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.53
361 TestNetworkPlugins/group/flannel/Start 57.1
362 TestNetworkPlugins/group/auto/KubeletFlags 0.31
363 TestNetworkPlugins/group/auto/NetCatPod 10.3
364 TestNetworkPlugins/group/auto/DNS 0.2
365 TestNetworkPlugins/group/auto/Localhost 0.18
366 TestNetworkPlugins/group/auto/HairPin 0.16
367 TestNetworkPlugins/group/flannel/ControllerPod 6.01
368 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
369 TestNetworkPlugins/group/flannel/NetCatPod 11.37
370 TestNetworkPlugins/group/flannel/DNS 0.2
371 TestNetworkPlugins/group/flannel/Localhost 0.17
372 TestNetworkPlugins/group/flannel/HairPin 0.22
373 TestNetworkPlugins/group/calico/Start 64.24
374 TestNetworkPlugins/group/custom-flannel/Start 59.56
375 TestNetworkPlugins/group/calico/ControllerPod 6.01
376 TestNetworkPlugins/group/calico/KubeletFlags 0.3
377 TestNetworkPlugins/group/calico/NetCatPod 9.33
378 TestNetworkPlugins/group/calico/DNS 0.22
379 TestNetworkPlugins/group/calico/Localhost 0.23
380 TestNetworkPlugins/group/calico/HairPin 0.17
381 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
382 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
383 TestNetworkPlugins/group/custom-flannel/DNS 0.31
384 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
385 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
386 TestNetworkPlugins/group/kindnet/Start 53.08
387 TestNetworkPlugins/group/bridge/Start 43.57
388 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
389 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
390 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
392 TestNetworkPlugins/group/bridge/NetCatPod 10.38
393 TestNetworkPlugins/group/kindnet/DNS 0.16
394 TestNetworkPlugins/group/kindnet/Localhost 0.14
395 TestNetworkPlugins/group/kindnet/HairPin 0.16
396 TestNetworkPlugins/group/bridge/DNS 0.17
397 TestNetworkPlugins/group/bridge/Localhost 0.16
398 TestNetworkPlugins/group/bridge/HairPin 0.15
399 TestNetworkPlugins/group/enable-default-cni/Start 41.69
400 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
401 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.38
402 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
403 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
404 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
TestDownloadOnly/v1.28.0/json-events (10.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-826357 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-826357 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.416608982s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.42s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0111 07:25:52.425292 3124484 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0111 07:25:52.425419 3124484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-826357
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-826357: exit status 85 (172.816399ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-826357 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-826357 │ jenkins │ v1.37.0 │ 11 Jan 26 07:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:25:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:25:42.056441 3124490 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:42.056581 3124490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:42.056616 3124490 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:42.056621 3124490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:42.056898 3124490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	W0111 07:25:42.057040 3124490 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22402-3122619/.minikube/config/config.json: open /home/jenkins/minikube-integration/22402-3122619/.minikube/config/config.json: no such file or directory
	I0111 07:25:42.057503 3124490 out.go:368] Setting JSON to true
	I0111 07:25:42.058353 3124490 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":47293,"bootTime":1768069049,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 07:25:42.058467 3124490 start.go:143] virtualization:  
	I0111 07:25:42.064057 3124490 out.go:99] [download-only-826357] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0111 07:25:42.064320 3124490 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball: no such file or directory
	I0111 07:25:42.064443 3124490 notify.go:221] Checking for updates...
	I0111 07:25:42.068213 3124490 out.go:171] MINIKUBE_LOCATION=22402
	I0111 07:25:42.072044 3124490 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:25:42.076319 3124490 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 07:25:42.079830 3124490 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 07:25:42.083396 3124490 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0111 07:25:42.090147 3124490 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0111 07:25:42.090500 3124490 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:25:42.131163 3124490 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 07:25:42.131386 3124490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:25:42.206772 3124490 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-11 07:25:42.19107858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:25:42.206891 3124490 docker.go:319] overlay module found
	I0111 07:25:42.210303 3124490 out.go:99] Using the docker driver based on user configuration
	I0111 07:25:42.210376 3124490 start.go:309] selected driver: docker
	I0111 07:25:42.210385 3124490 start.go:928] validating driver "docker" against <nil>
	I0111 07:25:42.210542 3124490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:25:42.272372 3124490 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-11 07:25:42.262634148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:25:42.272539 3124490 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:25:42.272835 3124490 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0111 07:25:42.272991 3124490 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:25:42.276458 3124490 out.go:171] Using Docker driver with root privileges
	I0111 07:25:42.279643 3124490 cni.go:84] Creating CNI manager for ""
	I0111 07:25:42.279727 3124490 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0111 07:25:42.279740 3124490 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0111 07:25:42.279826 3124490 start.go:353] cluster config:
	{Name:download-only-826357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-826357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:25:42.283071 3124490 out.go:99] Starting "download-only-826357" primary control-plane node in "download-only-826357" cluster
	I0111 07:25:42.283108 3124490 cache.go:134] Beginning downloading kic base image for docker with containerd
	I0111 07:25:42.286226 3124490 out.go:99] Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:25:42.286309 3124490 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0111 07:25:42.286494 3124490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:25:42.304373 3124490 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 07:25:42.304581 3124490 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory
	I0111 07:25:42.304691 3124490 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 07:25:42.332669 3124490 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0111 07:25:42.332704 3124490 cache.go:65] Caching tarball of preloaded images
	I0111 07:25:42.332907 3124490 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0111 07:25:42.336386 3124490 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0111 07:25:42.336413 3124490 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0111 07:25:42.336421 3124490 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I0111 07:25:42.421954 3124490 preload.go:313] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I0111 07:25:42.422085 3124490 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I0111 07:25:46.298354 3124490 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I0111 07:25:46.298730 3124490 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/download-only-826357/config.json ...
	I0111 07:25:46.298765 3124490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/download-only-826357/config.json: {Name:mkc927b1e9a955f06338c02b767f23095b3032a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:25:46.298955 3124490 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I0111 07:25:46.299151 3124490 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-826357 host does not exist
	  To start a cluster, run: "minikube start -p download-only-826357"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.17s)
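
The exit status 85 captured here appears to be the expected outcome: the profile was created with --download-only, so no control-plane node was ever started and "minikube logs" has nothing to collect beyond the audit trail. Acting on the hint at the end of the captured output would look like the single command below (shown only to illustrate the hint; in this run the subsequent DeleteAll/DeleteAlwaysSucceeds steps remove the profile instead):

	# Start the download-only profile for real, as the captured log output suggests.
	out/minikube-linux-arm64 start -p download-only-826357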

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.36s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-826357
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.25s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (3.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-009511 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-009511 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.984585895s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.98s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0111 07:25:57.199998 3124484 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime containerd
I0111 07:25:57.200039 3124484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-009511
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-009511: exit status 85 (83.913973ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-826357 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-826357 │ jenkins │ v1.37.0 │ 11 Jan 26 07:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 11 Jan 26 07:25 UTC │ 11 Jan 26 07:25 UTC │
	│ delete  │ -p download-only-826357                                                                                                                                                               │ download-only-826357 │ jenkins │ v1.37.0 │ 11 Jan 26 07:25 UTC │ 11 Jan 26 07:25 UTC │
	│ start   │ -o=json --download-only -p download-only-009511 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-009511 │ jenkins │ v1.37.0 │ 11 Jan 26 07:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:25:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:25:53.260403 3124693 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:25:53.260604 3124693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:53.260629 3124693 out.go:374] Setting ErrFile to fd 2...
	I0111 07:25:53.260647 3124693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:25:53.260911 3124693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:25:53.261376 3124693 out.go:368] Setting JSON to true
	I0111 07:25:53.262427 3124693 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":47305,"bootTime":1768069049,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 07:25:53.262528 3124693 start.go:143] virtualization:  
	I0111 07:25:53.307447 3124693 out.go:99] [download-only-009511] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 07:25:53.307753 3124693 notify.go:221] Checking for updates...
	I0111 07:25:53.339189 3124693 out.go:171] MINIKUBE_LOCATION=22402
	I0111 07:25:53.387519 3124693 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:25:53.418065 3124693 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 07:25:53.451579 3124693 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 07:25:53.483171 3124693 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0111 07:25:53.540616 3124693 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0111 07:25:53.540916 3124693 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:25:53.562760 3124693 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 07:25:53.562868 3124693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:25:53.624366 3124693 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-11 07:25:53.614840921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:25:53.624471 3124693 docker.go:319] overlay module found
	I0111 07:25:53.668085 3124693 out.go:99] Using the docker driver based on user configuration
	I0111 07:25:53.668136 3124693 start.go:309] selected driver: docker
	I0111 07:25:53.668144 3124693 start.go:928] validating driver "docker" against <nil>
	I0111 07:25:53.668274 3124693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:25:53.731084 3124693 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-11 07:25:53.72115042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:25:53.731237 3124693 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:25:53.731545 3124693 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0111 07:25:53.731705 3124693 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:25:53.763717 3124693 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-009511 host does not exist
	  To start a cluster, run: "minikube start -p download-only-009511"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-009511
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0111 07:25:58.373179 3124484 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-731808 --alsologtostderr --binary-mirror http://127.0.0.1:40327 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-731808" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-731808
--- PASS: TestBinaryMirror (0.59s)
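
The --binary-mirror flag exercised here redirects the Kubernetes binary downloads away from dl.k8s.io. A minimal sketch of the same invocation outside the harness (the 127.0.0.1:40327 address is the throwaway mirror this run started; substitute your own mirror URL):

    out/minikube-linux-arm64 start --download-only -p binary-mirror-731808 \
      --binary-mirror http://127.0.0.1:40327 \
      --driver=docker --container-runtime=containerd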

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-709292
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-709292: exit status 85 (67.945621ms)

                                                
                                                
-- stdout --
	* Profile "addons-709292" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-709292"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-709292
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-709292: exit status 85 (76.399625ms)

                                                
                                                
-- stdout --
	* Profile "addons-709292" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-709292"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (121.22s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-709292 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-709292 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m1.217407849s)
--- PASS: TestAddons/Setup (121.22s)

                                                
                                    
TestAddons/serial/Volcano (39.66s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 52.712748ms
addons_test.go:878: volcano-admission stabilized in 53.177854ms
addons_test.go:870: volcano-scheduler stabilized in 53.578855ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-j22vj" [943f1e5a-28cb-4053-b2c6-77f8c2e0fee8] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003364867s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-kjhv7" [a46a5ec5-f013-4450-94fe-8eab7542d55b] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004043028s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-j7tmz" [543ab68a-4b1a-4930-a867-9d1c5eb52b8e] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003879802s
addons_test.go:905: (dbg) Run:  kubectl --context addons-709292 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-709292 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-709292 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [7be062c9-1759-4fd3-8952-ae3c1c31ff4a] Pending
helpers_test.go:353: "test-job-nginx-0" [7be062c9-1759-4fd3-8952-ae3c1c31ff4a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [7be062c9-1759-4fd3-8952-ae3c1c31ff4a] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003373178s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-709292 addons disable volcano --alsologtostderr -v=1: (12.000126031s)
--- PASS: TestAddons/serial/Volcano (39.66s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-709292 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-709292 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.85s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-709292 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-709292 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e2312ec3-877a-4872-8f80-7efa8872da7d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e2312ec3-877a-4872-8f80-7efa8872da7d] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003578593s
addons_test.go:696: (dbg) Run:  kubectl --context addons-709292 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-709292 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-709292 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-709292 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.85s)
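
The credential checks above can be repeated by hand against any pod in the cluster; a minimal sketch using the busybox pod and kube context from this run:

    kubectl --context addons-709292 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-709292 exec busybox -- cat /google-app-creds.json
    kubectl --context addons-709292 exec busybox -- printenv GOOGLE_CLOUD_PROJECT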

                                                
                                    
TestAddons/parallel/Registry (15.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.774795ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-fhws9" [bbc300a7-6e05-4225-8a8b-4c9278f41c3f] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003860523s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-vsl58" [fa1c269b-ecf0-4c9c-b71a-434f0374e56e] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003706753s
addons_test.go:394: (dbg) Run:  kubectl --context addons-709292 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-709292 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-709292 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.287610916s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 ip
2026/01/11 07:29:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.33s)
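
A minimal sketch of the same registry reachability checks outside the test (profile name and service DNS name taken from the run above; the host-side curl mirrors the test's own debug request to port 5000):

    # from inside the cluster, via a throwaway busybox pod
    kubectl --context addons-709292 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # from the host, against the node IP reported by "minikube ip"
    curl -sI http://$(out/minikube-linux-arm64 -p addons-709292 ip):5000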

                                                
                                    
TestAddons/parallel/RegistryCreds (0.81s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.583287ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-709292
addons_test.go:334: (dbg) Run:  kubectl --context addons-709292 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.81s)

                                                
                                    
TestAddons/parallel/Ingress (18.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-709292 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-709292 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-709292 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [92d1fa66-a388-4d45-a70b-1eee4965cc43] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [92d1fa66-a388-4d45-a70b-1eee4965cc43] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003927078s
I0111 07:29:44.097589 3124484 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-709292 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-709292 addons disable ingress-dns --alsologtostderr -v=1: (1.83219895s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-709292 addons disable ingress --alsologtostderr -v=1: (7.923099309s)
--- PASS: TestAddons/parallel/Ingress (18.75s)
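
A minimal sketch of the two checks this test performs, runnable by hand with the profile from this run:

    # ingress: hit the controller and select the nginx rule via the Host header
    out/minikube-linux-arm64 -p addons-709292 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: resolve a test hostname against the cluster IP reported by "minikube ip"
    nslookup hello-john.test $(out/minikube-linux-arm64 -p addons-709292 ip)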

                                                
                                    
TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-hggjw" [99485414-a8df-4502-bd9d-c6c8c236c073] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00396842s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-709292 addons disable inspektor-gadget --alsologtostderr -v=1: (5.784482143s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.94s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.166808ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-jdxzx" [0eaf5f28-622f-43e6-9ef7-db8319b0b8f7] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003186221s
addons_test.go:465: (dbg) Run:  kubectl --context addons-709292 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.94s)

                                                
                                    
TestAddons/parallel/CSI (50.87s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0111 07:29:16.084948 3124484 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0111 07:29:16.089530 3124484 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0111 07:29:16.089556 3124484 kapi.go:107] duration metric: took 7.991608ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.002643ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-709292 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-709292 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [175e7726-b248-420a-ab0c-5ef6d9701d60] Pending
helpers_test.go:353: "task-pv-pod" [175e7726-b248-420a-ab0c-5ef6d9701d60] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [175e7726-b248-420a-ab0c-5ef6d9701d60] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003466761s
addons_test.go:574: (dbg) Run:  kubectl --context addons-709292 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-709292 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-709292 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-709292 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-709292 delete pod task-pv-pod: (1.268440077s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-709292 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-709292 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-709292 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [ef37a2f1-6088-4928-935d-0ec05a9dd690] Pending
helpers_test.go:353: "task-pv-pod-restore" [ef37a2f1-6088-4928-935d-0ec05a9dd690] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [ef37a2f1-6088-4928-935d-0ec05a9dd690] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003786655s
addons_test.go:616: (dbg) Run:  kubectl --context addons-709292 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-709292 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-709292 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-709292 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.476139045s)
--- PASS: TestAddons/parallel/CSI (50.87s)
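
The long runs of "get pvc ... jsonpath={.status.phase}" above are the helper polling until each claim reports Bound. A minimal shell equivalent of that wait (claim name and kube context from this run):

    until [ "$(kubectl --context addons-709292 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done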

                                                
                                    
TestAddons/parallel/Headlamp (17.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-709292 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-709292 --alsologtostderr -v=1: (1.024734386s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-t74nz" [3e12e132-12d4-4de6-a668-347a37d9646b] Pending
helpers_test.go:353: "headlamp-6d8d595f-t74nz" [3e12e132-12d4-4de6-a668-347a37d9646b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-t74nz" [3e12e132-12d4-4de6-a668-347a37d9646b] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003575451s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-709292 addons disable headlamp --alsologtostderr -v=1: (5.848336785s)
--- PASS: TestAddons/parallel/Headlamp (17.88s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-9dhcl" [b96b6fa4-6136-4b86-a012-db8a8ccc1789] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003320442s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
TestAddons/parallel/LocalPath (53.13s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-709292 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-709292 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-709292 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [421bed41-8f22-4c38-b828-31ad388d7af1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [421bed41-8f22-4c38-b828-31ad388d7af1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [421bed41-8f22-4c38-b828-31ad388d7af1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002943853s
addons_test.go:969: (dbg) Run:  kubectl --context addons-709292 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 ssh "cat /opt/local-path-provisioner/pvc-26ac3201-390a-45dd-b98c-df9b663c3f50_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-709292 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-709292 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-709292 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.030529052s)
--- PASS: TestAddons/parallel/LocalPath (53.13s)
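
A minimal sketch of the final verification step, reading back the data the pod wrote onto the node (the pvc-... directory name is specific to this run and will differ elsewhere):

    out/minikube-linux-arm64 -p addons-709292 ssh \
      "cat /opt/local-path-provisioner/pvc-26ac3201-390a-45dd-b98c-df9b663c3f50_default_test-pvc/file1"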

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.9s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-482wt" [1985bc52-30d9-4522-a4dc-42b1fc01af6a] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.014032539s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.90s)

                                                
                                    
TestAddons/parallel/Yakd (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-9ngsm" [92865cea-0eb9-4c0f-8755-c270b5c6679a] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00364705s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-709292 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-709292 addons disable yakd --alsologtostderr -v=1: (5.980052861s)
--- PASS: TestAddons/parallel/Yakd (11.98s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.6s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-709292
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-709292: (12.325935721s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-709292
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-709292
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-709292
--- PASS: TestAddons/StoppedEnableDisable (12.60s)

                                                
                                    
TestCertOptions (30.55s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-554375 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-554375 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (27.624743735s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-554375 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-554375 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-554375 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-554375" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-554375
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-554375: (2.209843814s)
--- PASS: TestCertOptions (30.55s)
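
A minimal sketch of confirming the extra SANs by hand, as the test does (profile name from this run; the grep is only there to narrow the output):

    out/minikube-linux-arm64 -p cert-options-554375 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"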

                                                
                                    
TestCertExpiration (215.44s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-192657 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-192657 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.748840921s)
E0111 08:09:37.240503 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-192657 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-192657 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.078048794s)
helpers_test.go:176: Cleaning up "cert-expiration-192657" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-192657
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-192657: (2.609540798s)
--- PASS: TestCertExpiration (215.44s)
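
The two-phase flow above can be reproduced directly; a minimal sketch (profile name from this run; the pause corresponds to waiting out the 3m certificate lifetime):

    out/minikube-linux-arm64 start -p cert-expiration-192657 --memory=3072 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # ...wait for the short-lived certificates to expire, then restart with a longer lifetime
    out/minikube-linux-arm64 start -p cert-expiration-192657 --memory=3072 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd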

                                                
                                    
TestDockerEnvContainerd (43.32s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-902784 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-902784 --driver=docker  --container-runtime=containerd: (27.658146128s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-902784"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-902784": (1.069958658s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-6L6hugapFhaV/agent.3144431" SSH_AGENT_PID="3144432" DOCKER_HOST=ssh://docker@127.0.0.1:35543 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-6L6hugapFhaV/agent.3144431" SSH_AGENT_PID="3144432" DOCKER_HOST=ssh://docker@127.0.0.1:35543 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-6L6hugapFhaV/agent.3144431" SSH_AGENT_PID="3144432" DOCKER_HOST=ssh://docker@127.0.0.1:35543 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.377052035s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-6L6hugapFhaV/agent.3144431" SSH_AGENT_PID="3144432" DOCKER_HOST=ssh://docker@127.0.0.1:35543 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-902784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-902784
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-902784: (2.361832967s)
--- PASS: TestDockerEnvContainerd (43.32s)
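
A minimal sketch of the docker-env flow exercised here, pointing a host docker CLI at the containerd-backed node over ssh (profile name from this run; eval applies the emitted environment instead of the explicit SSH_AUTH_SOCK/DOCKER_HOST values captured in the log):

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-902784)"
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls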

                                                
                                    
TestErrorSpam/setup (26.32s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-635772 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-635772 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-635772 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-635772 --driver=docker  --container-runtime=containerd: (26.320969502s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (26.32s)

                                                
                                    
TestErrorSpam/start (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

                                                
                                    
TestErrorSpam/status (1.19s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 status
--- PASS: TestErrorSpam/status (1.19s)

                                                
                                    
TestErrorSpam/pause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 pause
--- PASS: TestErrorSpam/pause (1.86s)

                                                
                                    
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
TestErrorSpam/stop (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 stop: (1.441600671s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-635772 --log_dir /tmp/nospam-635772 stop
--- PASS: TestErrorSpam/stop (1.64s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22402-3122619/.minikube/files/etc/test/nested/copy/3124484/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (43.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-214480 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0111 07:33:00.483521 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:00.489552 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:00.499863 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:00.520137 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:00.560422 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:00.640711 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:00.800991 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:01.121551 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:01.762508 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:03.043235 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:05.604912 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:10.725880 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:33:20.966764 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-214480 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (43.02084455s)
--- PASS: TestFunctional/serial/StartWithProxy (43.02s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.16s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0111 07:33:22.707055 3124484 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-214480 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-214480 --alsologtostderr -v=8: (7.153638145s)
functional_test.go:678: soft start took 7.157475979s for "functional-214480" cluster.
I0111 07:33:29.861380 3124484 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (7.16s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-214480 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 cache add registry.k8s.io/pause:3.1: (1.617811383s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 cache add registry.k8s.io/pause:3.3: (1.433174986s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 cache add registry.k8s.io/pause:latest: (1.213042534s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-214480 /tmp/TestFunctionalserialCacheCmdcacheadd_local1243963013/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cache add minikube-local-cache-test:functional-214480
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cache delete minikube-local-cache-test:functional-214480
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-214480
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.78407ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cache reload
functional_test.go:1178: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 cache reload: (1.023805275s)
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
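Taken together, the cache subtests above cover the full image-cache round trip. A condensed sketch of that workflow against the same functional-214480 profile, with the image names taken from the runs above:

    # Cache an image locally and load it into the node.
    out/minikube-linux-arm64 -p functional-214480 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    # Confirm it is present inside the node, remove it there, then re-sync.
    out/minikube-linux-arm64 -p functional-214480 ssh sudo crictl images
    out/minikube-linux-arm64 -p functional-214480 ssh sudo crictl rmi registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 -p functional-214480 cache reload
    # Drop it from the local cache again.
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1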

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 kubectl -- --context functional-214480 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-214480 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-214480 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0111 07:33:41.447639 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-214480 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.746285484s)
functional_test.go:776: restart took 42.7463851s for "functional-214480" cluster.
I0111 07:34:21.063119 3124484 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (42.75s)
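The restart above is how per-component flags are injected: --extra-config=<component>.<flag>=<value> is applied on the next start of the existing profile. A short sketch, assuming the same profile, that sets the apiserver admission plugin and then confirms the control plane came back healthy:

    # Restart the existing cluster with an extra kube-apiserver flag and wait for all components.
    out/minikube-linux-arm64 start -p functional-214480 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # Control-plane pods should be Running/Ready afterwards (the same check ComponentHealth makes below).
    kubectl --context functional-214480 get po -l tier=control-plane -n kube-system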

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-214480 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 logs
E0111 07:34:22.408728 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 logs: (1.49788021s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 logs --file /tmp/TestFunctionalserialLogsFileCmd2706522936/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 logs --file /tmp/TestFunctionalserialLogsFileCmd2706522936/001/logs.txt: (1.552282249s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
TestFunctional/serial/InvalidService (4.47s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-214480 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-214480
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-214480: exit status 115 (814.058781ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30234 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-214480 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.47s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 config get cpus: exit status 14 (58.430006ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 config get cpus: exit status 14 (59.930003ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
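The config subcommands behave like a small per-profile key/value store, and "config get" on an unset key exits with status 14, which is what the test asserts twice above. A quick sketch of the same cycle:

    out/minikube-linux-arm64 -p functional-214480 config set cpus 2
    out/minikube-linux-arm64 -p functional-214480 config get cpus        # prints 2
    out/minikube-linux-arm64 -p functional-214480 config unset cpus
    # After unsetting, get fails with exit status 14.
    out/minikube-linux-arm64 -p functional-214480 config get cpus || echo "unset (exit $?)"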

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-214480 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-214480 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 3161215: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.58s)

                                                
                                    
TestFunctional/parallel/DryRun (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-214480 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-214480 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (229.829477ms)

                                                
                                                
-- stdout --
	* [functional-214480] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:35:05.341493 3160617 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:35:05.345574 3160617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:35:05.345601 3160617 out.go:374] Setting ErrFile to fd 2...
	I0111 07:35:05.345609 3160617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:35:05.346088 3160617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:35:05.346634 3160617 out.go:368] Setting JSON to false
	I0111 07:35:05.347746 3160617 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":47857,"bootTime":1768069049,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 07:35:05.347899 3160617 start.go:143] virtualization:  
	I0111 07:35:05.351368 3160617 out.go:179] * [functional-214480] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 07:35:05.354309 3160617 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:35:05.354553 3160617 notify.go:221] Checking for updates...
	I0111 07:35:05.360399 3160617 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:35:05.363403 3160617 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 07:35:05.366462 3160617 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 07:35:05.371937 3160617 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 07:35:05.374840 3160617 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:35:05.378118 3160617 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:35:05.378750 3160617 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:35:05.414026 3160617 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 07:35:05.414204 3160617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:35:05.483648 3160617 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 07:35:05.47450835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:35:05.483760 3160617 docker.go:319] overlay module found
	I0111 07:35:05.486926 3160617 out.go:179] * Using the docker driver based on existing profile
	I0111 07:35:05.489833 3160617 start.go:309] selected driver: docker
	I0111 07:35:05.489855 3160617 start.go:928] validating driver "docker" against &{Name:functional-214480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-214480 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:35:05.489962 3160617 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:35:05.493584 3160617 out.go:203] 
	W0111 07:35:05.496523 3160617 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0111 07:35:05.499284 3160617 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-214480 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.55s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-214480 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-214480 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (202.635539ms)

                                                
                                                
-- stdout --
	* [functional-214480] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:34:58.159514 3159373 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:34:58.159637 3159373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:34:58.159649 3159373 out.go:374] Setting ErrFile to fd 2...
	I0111 07:34:58.159657 3159373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:34:58.160837 3159373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:34:58.161348 3159373 out.go:368] Setting JSON to false
	I0111 07:34:58.162328 3159373 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":47850,"bootTime":1768069049,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 07:34:58.162425 3159373 start.go:143] virtualization:  
	I0111 07:34:58.166003 3159373 out.go:179] * [functional-214480] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0111 07:34:58.169929 3159373 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:34:58.170086 3159373 notify.go:221] Checking for updates...
	I0111 07:34:58.176386 3159373 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:34:58.179372 3159373 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 07:34:58.182315 3159373 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 07:34:58.185203 3159373 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 07:34:58.187934 3159373 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:34:58.191289 3159373 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:34:58.191868 3159373 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:34:58.217693 3159373 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 07:34:58.217816 3159373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:34:58.282384 3159373 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 07:34:58.273317498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:34:58.282492 3159373 docker.go:319] overlay module found
	I0111 07:34:58.285577 3159373 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0111 07:34:58.288417 3159373 start.go:309] selected driver: docker
	I0111 07:34:58.288442 3159373 start.go:928] validating driver "docker" against &{Name:functional-214480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-214480 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:34:58.288540 3159373 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:34:58.292097 3159373 out.go:203] 
	W0111 07:34:58.294839 3159373 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0111 07:34:58.297692 3159373 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-214480 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-214480 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-l72j8" [b591305f-ee43-44ea-932a-b201e44f3cb6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-l72j8" [b591305f-ee43-44ea-932a-b201e44f3cb6] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003961313s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31965
functional_test.go:1685: http://192.168.49.2:31965: success! body:
Request served by hello-node-connect-5d95464fd4-l72j8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31965
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
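The connect test is the standard NodePort round trip. A minimal sketch of the same flow, using the echo-server image and profile from this run (the explicit wait step is an addition standing in for the test's pod polling):

    kubectl --context functional-214480 create deployment hello-node-connect \
      --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-214480 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-214480 wait --for=condition=available deployment/hello-node-connect --timeout=120s
    # Resolve the NodePort URL through minikube and hit it; the echo server reports the serving pod.
    URL=$(out/minikube-linux-arm64 -p functional-214480 service hello-node-connect --url)
    curl -s "$URL"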

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (20.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [a7e6e875-48c8-4ca3-b223-c59a6914f9f8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003999782s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-214480 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-214480 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-214480 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-214480 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1c4c9f42-3daa-432d-ac3b-b05ee1255892] Pending
helpers_test.go:353: "sp-pod" [1c4c9f42-3daa-432d-ac3b-b05ee1255892] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002888145s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-214480 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-214480 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-214480 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [9b6bebb3-9253-47c7-b1ed-43f9c14df2e9] Pending
helpers_test.go:353: "sp-pod" [9b6bebb3-9253-47c7-b1ed-43f9c14df2e9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003615264s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-214480 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.72s)
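What the PVC test is really checking is that data written through the claim survives pod recreation. A condensed sketch of the same check, using the manifests referenced above (paths are relative to the minikube test tree; the wait steps stand in for the test's pod polling):

    kubectl --context functional-214480 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-214480 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-214480 wait --for=condition=Ready pod/sp-pod --timeout=240s
    kubectl --context functional-214480 exec sp-pod -- touch /tmp/mount/foo
    # Recreate the pod; the file must still be on the claim-backed volume.
    kubectl --context functional-214480 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-214480 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-214480 wait --for=condition=Ready pod/sp-pod --timeout=240s
    kubectl --context functional-214480 exec sp-pod -- ls /tmp/mount     # foo should be listed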

                                                
                                    
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh -n functional-214480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cp functional-214480:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3860805063/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh -n functional-214480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh -n functional-214480 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.27s)

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/3124484/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo cat /etc/test/nested/copy/3124484/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)
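The file checked above was synced because content placed under the profile's .minikube/files directory is copied into the node rooted at /. A hedged sketch of doing this by hand, assuming the default ~/.minikube location (this job relocates it via MINIKUBE_HOME, as shown in the log):

    # Files under ~/.minikube/files/<path> appear in the node at /<path> on the next start.
    mkdir -p ~/.minikube/files/etc/test/nested/copy/3124484
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/3124484/hosts
    out/minikube-linux-arm64 start -p functional-214480
    out/minikube-linux-arm64 -p functional-214480 ssh "sudo cat /etc/test/nested/copy/3124484/hosts"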

                                                
                                    
TestFunctional/parallel/CertSync (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/3124484.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo cat /etc/ssl/certs/3124484.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/3124484.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo cat /usr/share/ca-certificates/3124484.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/31244842.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo cat /etc/ssl/certs/31244842.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/31244842.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo cat /usr/share/ca-certificates/31244842.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.32s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-214480 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 ssh "sudo systemctl is-active docker": exit status 1 (395.375286ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 ssh "sudo systemctl is-active crio": exit status 1 (371.570955ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)
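
Note: the non-zero exits above are expected — `systemctl is-active` returns status 3 for inactive units, so on a containerd cluster both docker and crio should report "inactive". A sketch of that check, with the binary path and profile assumed from this run:

// runtime_inactive_check.go - sketch: verify docker/crio are inactive when containerd is the runtime.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-linux-arm64", "functional-214480" // assumptions from this run
	for _, unit := range []string{"docker", "crio"} {
		// A non-zero exit is expected here: `systemctl is-active` exits 3 for inactive units.
		out, err := exec.Command(bin, "-p", profile, "ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if state != "inactive" {
			log.Fatalf("%s should be inactive on a containerd cluster (err=%v), got %q", unit, err, state)
		}
		fmt.Printf("%s: %s (as expected)\n", unit, state)
	}
}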

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 version -o=json --components: (1.35553145s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-214480 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-214480
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-214480 image ls --format short --alsologtostderr:
I0111 07:35:13.066632 3162270 out.go:360] Setting OutFile to fd 1 ...
I0111 07:35:13.066872 3162270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:13.066896 3162270 out.go:374] Setting ErrFile to fd 2...
I0111 07:35:13.066912 3162270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:13.067197 3162270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
I0111 07:35:13.067828 3162270 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:13.067988 3162270 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:13.068544 3162270 cli_runner.go:164] Run: docker container inspect functional-214480 --format={{.State.Status}}
I0111 07:35:13.087595 3162270 ssh_runner.go:195] Run: systemctl --version
I0111 07:35:13.087650 3162270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-214480
I0111 07:35:13.108839 3162270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35553 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/functional-214480/id_rsa Username:docker}
I0111 07:35:13.223439 3162270 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-214480 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-214480                     │ sha256:ce2d2c │ 2.17MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ sha256:de369f │ 22.4MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ sha256:ddc842 │ 15.4MB │
│ registry.k8s.io/pause                             │ 3.10.1                                │ sha256:d7b100 │ 268kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:c96ee3 │ 38.5MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ sha256:ba04bb │ 8.03MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ sha256:611c66 │ 25.7MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ sha256:271e49 │ 21.7MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ sha256:c3fcf2 │ 24.7MB │
│ registry.k8s.io/pause                             │ latest                                │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test       │ functional-214480                     │ sha256:7cc36d │ 991B   │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ sha256:b1a8c6 │ 40.6MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ sha256:88898f │ 20.7MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ sha256:3d1873 │ 249kB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-214480 image ls --format table --alsologtostderr:
I0111 07:35:15.761243 3162548 out.go:360] Setting OutFile to fd 1 ...
I0111 07:35:15.761422 3162548 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:15.761435 3162548 out.go:374] Setting ErrFile to fd 2...
I0111 07:35:15.761441 3162548 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:15.761714 3162548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
I0111 07:35:15.762343 3162548 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:15.762479 3162548 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:15.763001 3162548 cli_runner.go:164] Run: docker container inspect functional-214480 --format={{.State.Status}}
I0111 07:35:15.790528 3162548 ssh_runner.go:195] Run: systemctl --version
I0111 07:35:15.790607 3162548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-214480
I0111 07:35:15.817810 3162548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35553 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/functional-214480/id_rsa Username:docker}
I0111 07:35:15.934827 3162548 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-214480 image ls --format json --alsologtostderr:
[{"id":"sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"38502448"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"
id":"sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"22432091"},{"id":"sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"15405198"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559
"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":
"21749640"},{"id":"sha256:7cc36d3987e81f9214fbf5a299a8f5d2d32e09e1b47ab0d70582194f29b87101","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-214480"],"size":"991"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"25743422"},{"id":"sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"206722
43"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"24692295"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-214480 image ls --format json --alsologtostderr:
I0111 07:35:15.486282 3162513 out.go:360] Setting OutFile to fd 1 ...
I0111 07:35:15.486390 3162513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:15.486402 3162513 out.go:374] Setting ErrFile to fd 2...
I0111 07:35:15.486415 3162513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:15.486767 3162513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
I0111 07:35:15.487878 3162513 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:15.488066 3162513 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:15.488783 3162513 cli_runner.go:164] Run: docker container inspect functional-214480 --format={{.State.Status}}
I0111 07:35:15.515128 3162513 ssh_runner.go:195] Run: systemctl --version
I0111 07:35:15.515184 3162513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-214480
I0111 07:35:15.539128 3162513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35553 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/functional-214480/id_rsa Username:docker}
I0111 07:35:15.647992 3162513 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
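
Note: the JSON form of `image ls` shown above is straightforward to consume programmatically; a sketch that parses it, with struct fields mirroring the keys visible in this report (id, repoDigests, repoTags, size) and the binary path and profile assumed from this run:

// imagels_json.go - sketch: parse the output of `minikube image ls --format json`.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image matches the keys seen in the stdout above; size is a byte count as a string.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-214480",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("unmarshal: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-12s %v\n", img.Size, img.RepoTags)
	}
}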

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-214480 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "15405198"
- id: sha256:88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "20672243"
- id: sha256:de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "22432091"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "2173567"
- id: sha256:611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "25743422"
- id: sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "21749640"
- id: sha256:c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "24692295"
- id: sha256:c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "38502448"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:7cc36d3987e81f9214fbf5a299a8f5d2d32e09e1b47ab0d70582194f29b87101
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-214480
size: "991"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-214480 image ls --format yaml --alsologtostderr:
I0111 07:35:13.341719 3162311 out.go:360] Setting OutFile to fd 1 ...
I0111 07:35:13.342309 3162311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:13.342347 3162311 out.go:374] Setting ErrFile to fd 2...
I0111 07:35:13.342367 3162311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:13.342658 3162311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
I0111 07:35:13.343342 3162311 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:13.343513 3162311 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:13.344093 3162311 cli_runner.go:164] Run: docker container inspect functional-214480 --format={{.State.Status}}
I0111 07:35:13.369416 3162311 ssh_runner.go:195] Run: systemctl --version
I0111 07:35:13.369480 3162311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-214480
I0111 07:35:13.413958 3162311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35553 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/functional-214480/id_rsa Username:docker}
I0111 07:35:13.520103 3162311 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 ssh pgrep buildkitd: exit status 1 (390.688134ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image build -t localhost/my-image:functional-214480 testdata/build --alsologtostderr
2026/01/11 07:35:15 [DEBUG] GET http://127.0.0.1:42265/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 image build -t localhost/my-image:functional-214480 testdata/build --alsologtostderr: (3.66142288s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-214480 image build -t localhost/my-image:functional-214480 testdata/build --alsologtostderr:
I0111 07:35:14.028988 3162441 out.go:360] Setting OutFile to fd 1 ...
I0111 07:35:14.030476 3162441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:14.030497 3162441 out.go:374] Setting ErrFile to fd 2...
I0111 07:35:14.030503 3162441 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:35:14.030927 3162441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
I0111 07:35:14.033113 3162441 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:14.035381 3162441 config.go:182] Loaded profile config "functional-214480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
I0111 07:35:14.036034 3162441 cli_runner.go:164] Run: docker container inspect functional-214480 --format={{.State.Status}}
I0111 07:35:14.062792 3162441 ssh_runner.go:195] Run: systemctl --version
I0111 07:35:14.062842 3162441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-214480
I0111 07:35:14.102410 3162441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35553 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/functional-214480/id_rsa Username:docker}
I0111 07:35:14.215413 3162441 build_images.go:162] Building image from path: /tmp/build.1482205331.tar
I0111 07:35:14.215528 3162441 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0111 07:35:14.238090 3162441 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1482205331.tar
I0111 07:35:14.243118 3162441 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1482205331.tar: stat -c "%s %y" /var/lib/minikube/build/build.1482205331.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1482205331.tar': No such file or directory
I0111 07:35:14.243177 3162441 ssh_runner.go:362] scp /tmp/build.1482205331.tar --> /var/lib/minikube/build/build.1482205331.tar (3072 bytes)
I0111 07:35:14.265068 3162441 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1482205331
I0111 07:35:14.274694 3162441 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1482205331 -xf /var/lib/minikube/build/build.1482205331.tar
I0111 07:35:14.283772 3162441 containerd.go:402] Building image: /var/lib/minikube/build/build.1482205331
I0111 07:35:14.283875 3162441 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1482205331 --local dockerfile=/var/lib/minikube/build/build.1482205331 --output type=image,name=localhost/my-image:functional-214480
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:819732a270f4dbaf7e38271ebfc061e90e1db2edc2cea0cf2552279f9bd4b3e7
#8 exporting manifest sha256:819732a270f4dbaf7e38271ebfc061e90e1db2edc2cea0cf2552279f9bd4b3e7 0.0s done
#8 exporting config sha256:90baf48a8d6d59ec9acf43042dd63483df10f0ca2e36d783a311a82df7704ceb 0.0s done
#8 naming to localhost/my-image:functional-214480 done
#8 DONE 0.2s
I0111 07:35:17.585556 3162441 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1482205331 --local dockerfile=/var/lib/minikube/build/build.1482205331 --output type=image,name=localhost/my-image:functional-214480: (3.301648942s)
I0111 07:35:17.585622 3162441 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1482205331
I0111 07:35:17.594835 3162441 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1482205331.tar
I0111 07:35:17.604079 3162441 build_images.go:218] Built localhost/my-image:functional-214480 from /tmp/build.1482205331.tar
I0111 07:35:17.604118 3162441 build_images.go:134] succeeded building to: functional-214480
I0111 07:35:17.604125 3162441 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)
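
Note: on a containerd cluster `minikube image build` drives buildctl inside the node, as the buildkit steps above show. A sketch of the build-then-verify flow the test performs, with the binary path, profile, tag and context directory assumed from this run:

// imagebuild_sketch.go - sketch: build a context into the cluster runtime and confirm it is listed.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run invokes the assumed minikube binary and fails fast on error.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	tag := "localhost/my-image:functional-214480"
	run("-p", "functional-214480", "image", "build", "-t", tag, "testdata/build")
	if !strings.Contains(run("-p", "functional-214480", "image", "ls"), tag) {
		log.Fatalf("%s missing from image ls after build", tag)
	}
	fmt.Println("built and listed:", tag)
}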

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 --alsologtostderr: (1.275758859s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 --alsologtostderr: (1.079756154s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-214480 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 --alsologtostderr: (1.048497588s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "444.929796ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "82.158065ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.87s)
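
Note: SaveToFile, Remove and LoadFromFile together exercise a tarball round trip. A sketch of that cycle, assuming the binary path and profile from this run; the /tmp tarball path is arbitrary:

// image_roundtrip.go - sketch: save an image to a tar, remove it from the cluster, load it back.
package main

import (
	"log"
	"os/exec"
)

// mk runs the assumed minikube binary against the assumed profile.
func mk(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64", append([]string{"-p", "functional-214480"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	img := "ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480"
	tar := "/tmp/echo-server-save.tar" // arbitrary local path

	mk("image", "save", img, tar) // export from the cluster runtime to a tarball
	mk("image", "rm", img)        // drop it from the cluster
	mk("image", "load", tar)      // re-import from the tarball
	mk("image", "ls")             // the tag should be listed again
}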

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "438.234488ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "81.944982ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-214480 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-214480 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-214480 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-214480 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 3158137: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-214480 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-214480 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [1a798eff-0580-483c-a761-0ae221d48cac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [1a798eff-0580-483c-a761-0ae221d48cac] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.006092168s
I0111 07:34:45.633407 3124484 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.40s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-214480 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
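
Note: with `minikube tunnel` running, the LoadBalancer service gets a reachable ingress IP, which the jsonpath query above reads. A sketch of fetching that IP and probing it, assuming kubectl on PATH, the functional-214480 context, and the nginx-svc service from testdata/testsvc.yaml:

// tunnel_ingress_check.go - sketch: read the nginx-svc ingress IP and fetch it over HTTP.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-214480", "get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatalf("jsonpath query: %v", err)
	}
	ip := strings.TrimSpace(string(out))
	resp, err := http.Get("http://" + ip) // only reachable while the tunnel is up
	if err != nil {
		log.Fatalf("GET %s: %v", ip, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("tunnel at http://%s answered %s (%d bytes)\n", ip, resp.Status, len(body))
}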

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.12.56 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-214480 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-214480 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-214480 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-sg24l" [01dff03f-556d-4f7f-abc4-24659291a4be] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-sg24l" [01dff03f-556d-4f7f-abc4-24659291a4be] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003244741s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdany-port2962564806/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768116898308169499" to /tmp/TestFunctionalparallelMountCmdany-port2962564806/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768116898308169499" to /tmp/TestFunctionalparallelMountCmdany-port2962564806/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768116898308169499" to /tmp/TestFunctionalparallelMountCmdany-port2962564806/001/test-1768116898308169499
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (358.286589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0111 07:34:58.668343 3124484 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 11 07:34 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 11 07:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 11 07:34 test-1768116898308169499
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh cat /mount-9p/test-1768116898308169499
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-214480 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [f4e89cc3-dfb5-4f56-b61e-72e3f1064702] Pending
helpers_test.go:353: "busybox-mount" [f4e89cc3-dfb5-4f56-b61e-72e3f1064702] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [f4e89cc3-dfb5-4f56-b61e-72e3f1064702] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [f4e89cc3-dfb5-4f56-b61e-72e3f1064702] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.0037841s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-214480 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdany-port2962564806/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.65s)
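
Note: the first findmnt probe above fails and is retried because the 9p mount can take a moment to appear after `minikube mount` starts. A sketch of that wait loop, with the binary path, profile and mount point assumed from this run:

// mount_wait.go - sketch: poll until the 9p mount is visible inside the VM.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	bin, profile := "out/minikube-linux-arm64", "functional-214480"
	for attempt := 1; attempt <= 5; attempt++ {
		// The pipe runs in the VM's shell, exactly as the test's ssh invocation does.
		out, err := exec.Command(bin, "-p", profile, "ssh", "findmnt -T /mount-9p | grep 9p").Output()
		if err == nil {
			fmt.Printf("mount visible after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(600 * time.Millisecond) // roughly the backoff retry.go used above
	}
	log.Fatal("/mount-9p never showed a 9p filesystem")
}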

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 service list -o json
functional_test.go:1509: Took "519.755364ms" to run "out/minikube-linux-arm64 -p functional-214480 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:32308
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:32308
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdspecific-port2394421288/001:/mount-9p --alsologtostderr -v=1 --port 42517]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (543.358168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0111 07:35:07.498425 3124484 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdspecific-port2394421288/001:/mount-9p --alsologtostderr -v=1 --port 42517] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-214480 ssh "sudo umount -f /mount-9p": exit status 1 (358.702763ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-214480 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdspecific-port2394421288/001:/mount-9p --alsologtostderr -v=1 --port 42517] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388704153/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388704153/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388704153/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-214480 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-214480 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388704153/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388704153/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-214480 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1388704153/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-214480
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-214480
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-214480
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (184.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0111 07:35:44.328947 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:00.476964 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m3.794396495s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (184.68s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- rollout status deployment/busybox
E0111 07:38:28.169189 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 kubectl -- rollout status deployment/busybox: (4.39393776s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-6lfrq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-7h9rn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-xpc59 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-6lfrq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-7h9rn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-xpc59 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-6lfrq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-7h9rn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-xpc59 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.35s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-6lfrq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-6lfrq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-7h9rn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-7h9rn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-xpc59 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 kubectl -- exec busybox-769dd8b7dd-xpc59 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (31.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 node add --alsologtostderr -v 5: (30.016239763s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5: (1.094819754s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.11s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-487666 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.040867924s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 status --output json --alsologtostderr -v 5: (1.062908021s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp testdata/cp-test.txt ha-487666:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2666417137/001/cp-test_ha-487666.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666:/home/docker/cp-test.txt ha-487666-m02:/home/docker/cp-test_ha-487666_ha-487666-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m02 "sudo cat /home/docker/cp-test_ha-487666_ha-487666-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666:/home/docker/cp-test.txt ha-487666-m03:/home/docker/cp-test_ha-487666_ha-487666-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m03 "sudo cat /home/docker/cp-test_ha-487666_ha-487666-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666:/home/docker/cp-test.txt ha-487666-m04:/home/docker/cp-test_ha-487666_ha-487666-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m04 "sudo cat /home/docker/cp-test_ha-487666_ha-487666-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp testdata/cp-test.txt ha-487666-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2666417137/001/cp-test_ha-487666-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m02:/home/docker/cp-test.txt ha-487666:/home/docker/cp-test_ha-487666-m02_ha-487666.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666 "sudo cat /home/docker/cp-test_ha-487666-m02_ha-487666.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m02:/home/docker/cp-test.txt ha-487666-m03:/home/docker/cp-test_ha-487666-m02_ha-487666-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m03 "sudo cat /home/docker/cp-test_ha-487666-m02_ha-487666-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m02:/home/docker/cp-test.txt ha-487666-m04:/home/docker/cp-test_ha-487666-m02_ha-487666-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m04 "sudo cat /home/docker/cp-test_ha-487666-m02_ha-487666-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp testdata/cp-test.txt ha-487666-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2666417137/001/cp-test_ha-487666-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m03:/home/docker/cp-test.txt ha-487666:/home/docker/cp-test_ha-487666-m03_ha-487666.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666 "sudo cat /home/docker/cp-test_ha-487666-m03_ha-487666.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m03:/home/docker/cp-test.txt ha-487666-m02:/home/docker/cp-test_ha-487666-m03_ha-487666-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m02 "sudo cat /home/docker/cp-test_ha-487666-m03_ha-487666-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m03:/home/docker/cp-test.txt ha-487666-m04:/home/docker/cp-test_ha-487666-m03_ha-487666-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m04 "sudo cat /home/docker/cp-test_ha-487666-m03_ha-487666-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp testdata/cp-test.txt ha-487666-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2666417137/001/cp-test_ha-487666-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m04:/home/docker/cp-test.txt ha-487666:/home/docker/cp-test_ha-487666-m04_ha-487666.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666 "sudo cat /home/docker/cp-test_ha-487666-m04_ha-487666.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m04:/home/docker/cp-test.txt ha-487666-m02:/home/docker/cp-test_ha-487666-m04_ha-487666-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m02 "sudo cat /home/docker/cp-test_ha-487666-m04_ha-487666-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 cp ha-487666-m04:/home/docker/cp-test.txt ha-487666-m03:/home/docker/cp-test_ha-487666-m04_ha-487666-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 ssh -n ha-487666-m03 "sudo cat /home/docker/cp-test_ha-487666-m04_ha-487666-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.44s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 node stop m02 --alsologtostderr -v 5
E0111 07:39:37.240473 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:37.245759 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:37.255995 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:37.276396 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:37.316660 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:37.397030 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:37.557455 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:37.878037 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:38.519016 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 node stop m02 --alsologtostderr -v 5: (12.177042897s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5
E0111 07:39:39.799727 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5: exit status 7 (821.558725ms)

                                                
                                                
-- stdout --
	ha-487666
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-487666-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-487666-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-487666-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:39:39.091603 3178948 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:39:39.091774 3178948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:39:39.091807 3178948 out.go:374] Setting ErrFile to fd 2...
	I0111 07:39:39.091827 3178948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:39:39.092140 3178948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:39:39.092441 3178948 out.go:368] Setting JSON to false
	I0111 07:39:39.092511 3178948 mustload.go:66] Loading cluster: ha-487666
	I0111 07:39:39.092597 3178948 notify.go:221] Checking for updates...
	I0111 07:39:39.094012 3178948 config.go:182] Loaded profile config "ha-487666": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:39:39.094067 3178948 status.go:174] checking status of ha-487666 ...
	I0111 07:39:39.095579 3178948 cli_runner.go:164] Run: docker container inspect ha-487666 --format={{.State.Status}}
	I0111 07:39:39.119404 3178948 status.go:371] ha-487666 host status = "Running" (err=<nil>)
	I0111 07:39:39.119456 3178948 host.go:66] Checking if "ha-487666" exists ...
	I0111 07:39:39.119887 3178948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-487666
	I0111 07:39:39.152686 3178948 host.go:66] Checking if "ha-487666" exists ...
	I0111 07:39:39.153144 3178948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:39:39.153220 3178948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-487666
	I0111 07:39:39.173050 3178948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35558 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/ha-487666/id_rsa Username:docker}
	I0111 07:39:39.280556 3178948 ssh_runner.go:195] Run: systemctl --version
	I0111 07:39:39.290214 3178948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:39:39.303406 3178948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:39:39.374485 3178948 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2026-01-11 07:39:39.364697231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:39:39.375167 3178948 kubeconfig.go:125] found "ha-487666" server: "https://192.168.49.254:8443"
	I0111 07:39:39.375217 3178948 api_server.go:166] Checking apiserver status ...
	I0111 07:39:39.375279 3178948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:39:39.389762 3178948 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1338/cgroup
	I0111 07:39:39.398630 3178948 api_server.go:192] apiserver freezer: "4:freezer:/docker/d9d62e7f6cac322b8b29506000e5253b00bbbc488e62a213328825d3952f6f97/kubepods/burstable/podd6f17c871409f2c22b31686c5e26e987/752c0f6fc4a24d1c462d483a3e996f2fea3336e16d36bbde7f510861588c88c6"
	I0111 07:39:39.398703 3178948 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d9d62e7f6cac322b8b29506000e5253b00bbbc488e62a213328825d3952f6f97/kubepods/burstable/podd6f17c871409f2c22b31686c5e26e987/752c0f6fc4a24d1c462d483a3e996f2fea3336e16d36bbde7f510861588c88c6/freezer.state
	I0111 07:39:39.406541 3178948 api_server.go:214] freezer state: "THAWED"
	I0111 07:39:39.406574 3178948 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0111 07:39:39.416521 3178948 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0111 07:39:39.416554 3178948 status.go:463] ha-487666 apiserver status = Running (err=<nil>)
	I0111 07:39:39.416565 3178948 status.go:176] ha-487666 status: &{Name:ha-487666 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:39:39.416584 3178948 status.go:174] checking status of ha-487666-m02 ...
	I0111 07:39:39.416925 3178948 cli_runner.go:164] Run: docker container inspect ha-487666-m02 --format={{.State.Status}}
	I0111 07:39:39.434773 3178948 status.go:371] ha-487666-m02 host status = "Stopped" (err=<nil>)
	I0111 07:39:39.434801 3178948 status.go:384] host is not running, skipping remaining checks
	I0111 07:39:39.434809 3178948 status.go:176] ha-487666-m02 status: &{Name:ha-487666-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:39:39.434828 3178948 status.go:174] checking status of ha-487666-m03 ...
	I0111 07:39:39.435153 3178948 cli_runner.go:164] Run: docker container inspect ha-487666-m03 --format={{.State.Status}}
	I0111 07:39:39.455063 3178948 status.go:371] ha-487666-m03 host status = "Running" (err=<nil>)
	I0111 07:39:39.455089 3178948 host.go:66] Checking if "ha-487666-m03" exists ...
	I0111 07:39:39.455411 3178948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-487666-m03
	I0111 07:39:39.480589 3178948 host.go:66] Checking if "ha-487666-m03" exists ...
	I0111 07:39:39.481104 3178948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:39:39.481185 3178948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-487666-m03
	I0111 07:39:39.501426 3178948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35568 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/ha-487666-m03/id_rsa Username:docker}
	I0111 07:39:39.610610 3178948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:39:39.628048 3178948 kubeconfig.go:125] found "ha-487666" server: "https://192.168.49.254:8443"
	I0111 07:39:39.628128 3178948 api_server.go:166] Checking apiserver status ...
	I0111 07:39:39.628198 3178948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:39:39.643808 3178948 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	I0111 07:39:39.653399 3178948 api_server.go:192] apiserver freezer: "4:freezer:/docker/7d6c50c569a673aadaddae0b7a0d455c78f4f7d2361835923a6d774c20dbe6b0/kubepods/burstable/poda270e77bf74fba62505799594047c13e/921dacbb48d776def22e25d7fe772450297f56a36bc895e59194ea1d571c63cc"
	I0111 07:39:39.653535 3178948 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7d6c50c569a673aadaddae0b7a0d455c78f4f7d2361835923a6d774c20dbe6b0/kubepods/burstable/poda270e77bf74fba62505799594047c13e/921dacbb48d776def22e25d7fe772450297f56a36bc895e59194ea1d571c63cc/freezer.state
	I0111 07:39:39.661630 3178948 api_server.go:214] freezer state: "THAWED"
	I0111 07:39:39.661703 3178948 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0111 07:39:39.669998 3178948 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0111 07:39:39.670078 3178948 status.go:463] ha-487666-m03 apiserver status = Running (err=<nil>)
	I0111 07:39:39.670101 3178948 status.go:176] ha-487666-m03 status: &{Name:ha-487666-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:39:39.670142 3178948 status.go:174] checking status of ha-487666-m04 ...
	I0111 07:39:39.670487 3178948 cli_runner.go:164] Run: docker container inspect ha-487666-m04 --format={{.State.Status}}
	I0111 07:39:39.690853 3178948 status.go:371] ha-487666-m04 host status = "Running" (err=<nil>)
	I0111 07:39:39.690876 3178948 host.go:66] Checking if "ha-487666-m04" exists ...
	I0111 07:39:39.691203 3178948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-487666-m04
	I0111 07:39:39.709751 3178948 host.go:66] Checking if "ha-487666-m04" exists ...
	I0111 07:39:39.710078 3178948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:39:39.710121 3178948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-487666-m04
	I0111 07:39:39.730322 3178948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35573 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/ha-487666-m04/id_rsa Username:docker}
	I0111 07:39:39.842228 3178948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:39:39.858499 3178948 status.go:176] ha-487666-m04 status: &{Name:ha-487666-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.00s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (13.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 node start m02 --alsologtostderr -v 5
E0111 07:39:42.359975 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:39:47.480860 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 node start m02 --alsologtostderr -v 5: (11.808883205s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5: (1.235428576s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.08582747s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 stop --alsologtostderr -v 5
E0111 07:39:57.721765 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:40:18.201985 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 stop --alsologtostderr -v 5: (37.57665691s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 start --wait true --alsologtostderr -v 5
E0111 07:40:59.162731 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 start --wait true --alsologtostderr -v 5: (1m2.52346899s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.26s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 node delete m03 --alsologtostderr -v 5: (10.33134102s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.30s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 stop --alsologtostderr -v 5
E0111 07:42:21.083036 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 stop --alsologtostderr -v 5: (36.171082085s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5: exit status 7 (113.727876ms)

                                                
                                                
-- stdout --
	ha-487666
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-487666-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-487666-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 07:42:23.528416 3193568 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:42:23.528606 3193568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:42:23.528632 3193568 out.go:374] Setting ErrFile to fd 2...
	I0111 07:42:23.528650 3193568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:42:23.528957 3193568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:42:23.529197 3193568 out.go:368] Setting JSON to false
	I0111 07:42:23.529247 3193568 mustload.go:66] Loading cluster: ha-487666
	I0111 07:42:23.529388 3193568 notify.go:221] Checking for updates...
	I0111 07:42:23.529759 3193568 config.go:182] Loaded profile config "ha-487666": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:42:23.529793 3193568 status.go:174] checking status of ha-487666 ...
	I0111 07:42:23.530627 3193568 cli_runner.go:164] Run: docker container inspect ha-487666 --format={{.State.Status}}
	I0111 07:42:23.548652 3193568 status.go:371] ha-487666 host status = "Stopped" (err=<nil>)
	I0111 07:42:23.548673 3193568 status.go:384] host is not running, skipping remaining checks
	I0111 07:42:23.548680 3193568 status.go:176] ha-487666 status: &{Name:ha-487666 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:42:23.548708 3193568 status.go:174] checking status of ha-487666-m02 ...
	I0111 07:42:23.549038 3193568 cli_runner.go:164] Run: docker container inspect ha-487666-m02 --format={{.State.Status}}
	I0111 07:42:23.567925 3193568 status.go:371] ha-487666-m02 host status = "Stopped" (err=<nil>)
	I0111 07:42:23.567946 3193568 status.go:384] host is not running, skipping remaining checks
	I0111 07:42:23.567952 3193568 status.go:176] ha-487666-m02 status: &{Name:ha-487666-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:42:23.567971 3193568 status.go:174] checking status of ha-487666-m04 ...
	I0111 07:42:23.568381 3193568 cli_runner.go:164] Run: docker container inspect ha-487666-m04 --format={{.State.Status}}
	I0111 07:42:23.589131 3193568 status.go:371] ha-487666-m04 host status = "Stopped" (err=<nil>)
	I0111 07:42:23.589156 3193568 status.go:384] host is not running, skipping remaining checks
	I0111 07:42:23.589164 3193568 status.go:176] ha-487666-m04 status: &{Name:ha-487666-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.29s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (60.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E0111 07:43:00.477428 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.409873269s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (57.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 node add --control-plane --alsologtostderr -v 5: (56.637318188s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-487666 status --alsologtostderr -v 5: (1.0748446s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (57.71s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.104223526s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

                                                
                                    
TestJSONOutput/start/Command (48.5s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-451277 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E0111 07:44:37.236976 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:45:04.923786 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-451277 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (48.495050957s)
--- PASS: TestJSONOutput/start/Command (48.50s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-451277 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-451277 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.07s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-451277 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-451277 --output=json --user=testUser: (6.068790627s)
--- PASS: TestJSONOutput/stop/Command (6.07s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-658920 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-658920 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.733355ms)
-- stdout --
	{"specversion":"1.0","id":"1ecfcc27-0bff-4278-8d89-49004c46b451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-658920] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"21926cc2-8355-459b-b964-0a840313f2c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22402"}}
	{"specversion":"1.0","id":"a6003962-e0ae-4f21-af1f-ba67dc8e5fa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b2ca5db2-5f50-4c82-b5dc-03b02a7cf357","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig"}}
	{"specversion":"1.0","id":"8a5a9a20-138b-43fc-a892-332a6bff4b8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube"}}
	{"specversion":"1.0","id":"482eca1a-7fc3-41a4-9e5e-ab756feacbca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"227c31da-deae-4229-bfb1-61c741555124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2c7fa1ec-b0fe-48a5-aa95-8af172f22969","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-658920" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-658920
--- PASS: TestErrorJSONOutput (0.25s)
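Note on the captured stdout above: each line minikube emits with --output=json is a single CloudEvents-style JSON object whose type (io.k8s.sigs.minikube.step, .info, .error) and data map are visible in the output, with error events carrying exitcode and name fields such as DRV_UNSUPPORTED_OS. A minimal Go sketch of decoding one such line follows; the event struct is illustrative only and is not a type exported by minikube.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors only the fields visible in the JSON lines above; it is an
// illustrative struct, not minikube's own event type.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// One of the error events from the captured stdout above (abridged).
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// For error events, name and exitcode explain why the run stopped.
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
}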

TestKicCustomNetwork/create_custom_network (34.92s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-793810 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-793810 --network=: (32.688677262s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-793810" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-793810
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-793810: (2.209277896s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.92s)

TestKicCustomNetwork/use_default_bridge_network (28.51s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-952034 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-952034 --network=bridge: (26.435710311s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-952034" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-952034
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-952034: (2.046528061s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.51s)

TestKicExistingNetwork (30.12s)

=== RUN   TestKicExistingNetwork
I0111 07:46:36.276685 3124484 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 07:46:36.293200 3124484 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 07:46:36.293293 3124484 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0111 07:46:36.293311 3124484 cli_runner.go:164] Run: docker network inspect existing-network
W0111 07:46:36.308808 3124484 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0111 07:46:36.308840 3124484 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0111 07:46:36.308853 3124484 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0111 07:46:36.308957 3124484 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 07:46:36.326229 3124484 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6d6a2604bb10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:cd:63:f9:b2:f8} reservation:<nil>}
I0111 07:46:36.326546 3124484 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000010790}
I0111 07:46:36.327263 3124484 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0111 07:46:36.327347 3124484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0111 07:46:36.383067 3124484 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-088454 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-088454 --network=existing-network: (27.921997522s)
helpers_test.go:176: Cleaning up "existing-network-088454" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-088454
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-088454: (2.060087223s)
I0111 07:47:06.382425 3124484 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.12s)

TestKicCustomSubnet (29.75s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-946022 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-946022 --subnet=192.168.60.0/24: (27.460469147s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-946022 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-946022" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-946022
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-946022: (2.261855468s)
--- PASS: TestKicCustomSubnet (29.75s)

TestKicStaticIP (32.57s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-852943 --static-ip=192.168.200.200
E0111 07:48:00.482372 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-852943 --static-ip=192.168.200.200: (30.141029524s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-852943 ip
helpers_test.go:176: Cleaning up "static-ip-852943" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-852943
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-852943: (2.272892221s)
--- PASS: TestKicStaticIP (32.57s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (61.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-859794 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-859794 --driver=docker  --container-runtime=containerd: (26.802791637s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-862723 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-862723 --driver=docker  --container-runtime=containerd: (28.837844422s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-859794
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-862723
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-862723" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-862723
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-862723: (2.059933635s)
helpers_test.go:176: Cleaning up "first-859794" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-859794
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-859794: (2.472046722s)
--- PASS: TestMinikubeProfile (61.66s)

TestMountStart/serial/StartWithMountFirst (8.54s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-277244 --memory=3072 --mount-string /tmp/TestMountStartserial2510090402/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-277244 --memory=3072 --mount-string /tmp/TestMountStartserial2510090402/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.539829168s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.54s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-277244 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.64s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-279057 --memory=3072 --mount-string /tmp/TestMountStartserial2510090402/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0111 07:49:23.529715 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-279057 --memory=3072 --mount-string /tmp/TestMountStartserial2510090402/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.643524885s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.64s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-279057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-277244 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-277244 --alsologtostderr -v=5: (1.704149124s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-279057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-279057
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-279057: (1.290484899s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.7s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-279057
E0111 07:49:37.237876 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-279057: (6.696047753s)
--- PASS: TestMountStart/serial/RestartStopped (7.70s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-279057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (73.33s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-447637 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-447637 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m12.789194455s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.33s)

TestMultiNode/serial/DeployApp2Nodes (5.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-447637 -- rollout status deployment/busybox: (3.641690528s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-f2fkg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-h6vdt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-f2fkg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-h6vdt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-f2fkg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-h6vdt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.69s)

TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-f2fkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-f2fkg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-h6vdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-447637 -- exec busybox-769dd8b7dd-h6vdt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)

TestMultiNode/serial/AddNode (28.96s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-447637 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-447637 -v=5 --alsologtostderr: (28.186838362s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.96s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-447637 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.49s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp testdata/cp-test.txt multinode-447637:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2912021675/001/cp-test_multinode-447637.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637:/home/docker/cp-test.txt multinode-447637-m02:/home/docker/cp-test_multinode-447637_multinode-447637-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m02 "sudo cat /home/docker/cp-test_multinode-447637_multinode-447637-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637:/home/docker/cp-test.txt multinode-447637-m03:/home/docker/cp-test_multinode-447637_multinode-447637-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m03 "sudo cat /home/docker/cp-test_multinode-447637_multinode-447637-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp testdata/cp-test.txt multinode-447637-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2912021675/001/cp-test_multinode-447637-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637-m02:/home/docker/cp-test.txt multinode-447637:/home/docker/cp-test_multinode-447637-m02_multinode-447637.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637 "sudo cat /home/docker/cp-test_multinode-447637-m02_multinode-447637.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637-m02:/home/docker/cp-test.txt multinode-447637-m03:/home/docker/cp-test_multinode-447637-m02_multinode-447637-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m03 "sudo cat /home/docker/cp-test_multinode-447637-m02_multinode-447637-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp testdata/cp-test.txt multinode-447637-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2912021675/001/cp-test_multinode-447637-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637-m03:/home/docker/cp-test.txt multinode-447637:/home/docker/cp-test_multinode-447637-m03_multinode-447637.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637 "sudo cat /home/docker/cp-test_multinode-447637-m03_multinode-447637.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 cp multinode-447637-m03:/home/docker/cp-test.txt multinode-447637-m02:/home/docker/cp-test_multinode-447637-m03_multinode-447637-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 ssh -n multinode-447637-m02 "sudo cat /home/docker/cp-test_multinode-447637-m03_multinode-447637-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.49s)

TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-447637 node stop m03: (1.346679959s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-447637 status: exit status 7 (557.860129ms)
-- stdout --
	multinode-447637
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-447637-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-447637-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-447637 status --alsologtostderr: exit status 7 (554.47551ms)
-- stdout --
	multinode-447637
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-447637-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-447637-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0111 07:51:43.624738 3246789 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:51:43.624850 3246789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:51:43.624860 3246789 out.go:374] Setting ErrFile to fd 2...
	I0111 07:51:43.624865 3246789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:51:43.625122 3246789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:51:43.625313 3246789 out.go:368] Setting JSON to false
	I0111 07:51:43.625347 3246789 mustload.go:66] Loading cluster: multinode-447637
	I0111 07:51:43.625431 3246789 notify.go:221] Checking for updates...
	I0111 07:51:43.626612 3246789 config.go:182] Loaded profile config "multinode-447637": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:51:43.626639 3246789 status.go:174] checking status of multinode-447637 ...
	I0111 07:51:43.627467 3246789 cli_runner.go:164] Run: docker container inspect multinode-447637 --format={{.State.Status}}
	I0111 07:51:43.646564 3246789 status.go:371] multinode-447637 host status = "Running" (err=<nil>)
	I0111 07:51:43.646586 3246789 host.go:66] Checking if "multinode-447637" exists ...
	I0111 07:51:43.646886 3246789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-447637
	I0111 07:51:43.668938 3246789 host.go:66] Checking if "multinode-447637" exists ...
	I0111 07:51:43.669276 3246789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:51:43.669329 3246789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-447637
	I0111 07:51:43.691563 3246789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35678 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/multinode-447637/id_rsa Username:docker}
	I0111 07:51:43.803554 3246789 ssh_runner.go:195] Run: systemctl --version
	I0111 07:51:43.810309 3246789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:51:43.823969 3246789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:51:43.886708 3246789 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-11 07:51:43.876838368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:51:43.887271 3246789 kubeconfig.go:125] found "multinode-447637" server: "https://192.168.67.2:8443"
	I0111 07:51:43.887318 3246789 api_server.go:166] Checking apiserver status ...
	I0111 07:51:43.887378 3246789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:51:43.905480 3246789 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	I0111 07:51:43.915672 3246789 api_server.go:192] apiserver freezer: "4:freezer:/docker/5ef6c21e024edb51af740d82ba39b98592a3de25efec71b2d5dee54c294755a6/kubepods/burstable/podaa5c3326cf2a10a2131d42f628e79a23/0917bba7cb6766ff040de9a5c3e5cede5a8843a00988ec61a07fdad58d122c5f"
	I0111 07:51:43.915761 3246789 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5ef6c21e024edb51af740d82ba39b98592a3de25efec71b2d5dee54c294755a6/kubepods/burstable/podaa5c3326cf2a10a2131d42f628e79a23/0917bba7cb6766ff040de9a5c3e5cede5a8843a00988ec61a07fdad58d122c5f/freezer.state
	I0111 07:51:43.923746 3246789 api_server.go:214] freezer state: "THAWED"
	I0111 07:51:43.923776 3246789 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0111 07:51:43.932415 3246789 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0111 07:51:43.932446 3246789 status.go:463] multinode-447637 apiserver status = Running (err=<nil>)
	I0111 07:51:43.932458 3246789 status.go:176] multinode-447637 status: &{Name:multinode-447637 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:51:43.932475 3246789 status.go:174] checking status of multinode-447637-m02 ...
	I0111 07:51:43.932805 3246789 cli_runner.go:164] Run: docker container inspect multinode-447637-m02 --format={{.State.Status}}
	I0111 07:51:43.950688 3246789 status.go:371] multinode-447637-m02 host status = "Running" (err=<nil>)
	I0111 07:51:43.950714 3246789 host.go:66] Checking if "multinode-447637-m02" exists ...
	I0111 07:51:43.951020 3246789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-447637-m02
	I0111 07:51:43.968677 3246789 host.go:66] Checking if "multinode-447637-m02" exists ...
	I0111 07:51:43.969080 3246789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:51:43.969129 3246789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-447637-m02
	I0111 07:51:43.987963 3246789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35683 SSHKeyPath:/home/jenkins/minikube-integration/22402-3122619/.minikube/machines/multinode-447637-m02/id_rsa Username:docker}
	I0111 07:51:44.094010 3246789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:51:44.107379 3246789 status.go:176] multinode-447637-m02 status: &{Name:multinode-447637-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:51:44.107413 3246789 status.go:174] checking status of multinode-447637-m03 ...
	I0111 07:51:44.107743 3246789 cli_runner.go:164] Run: docker container inspect multinode-447637-m03 --format={{.State.Status}}
	I0111 07:51:44.125151 3246789 status.go:371] multinode-447637-m03 host status = "Stopped" (err=<nil>)
	I0111 07:51:44.125170 3246789 status.go:384] host is not running, skipping remaining checks
	I0111 07:51:44.125177 3246789 status.go:176] multinode-447637-m03 status: &{Name:multinode-447637-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)
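Worth noting from the run above: minikube status still prints the per-node summary on stdout when it exits non-zero, and with node m03 stopped the command returned exit status 7. A small Go sketch of invoking it and reading that exit code follows; the binary path and profile name are taken from this run, and the meaning of exit status 7 is inferred only from the output captured above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the test above; adjust the binary path and
	// profile name for your environment.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-447637", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above, exit status 7 accompanied a stopped node.
		fmt.Println("status exited with code", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}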

TestMultiNode/serial/StartAfterStop (8.26s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-447637 node start m03 -v=5 --alsologtostderr: (7.445035827s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.26s)

TestMultiNode/serial/RestartKeepsNodes (77.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-447637
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-447637
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-447637: (25.223295435s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-447637 --wait=true -v=5 --alsologtostderr
E0111 07:53:00.476692 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-447637 --wait=true -v=5 --alsologtostderr: (51.869256322s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-447637
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.21s)

TestMultiNode/serial/DeleteNode (5.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-447637 node delete m03: (5.027695057s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.77s)

TestMultiNode/serial/StopMultiNode (24.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-447637 stop: (23.90812517s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-447637 status: exit status 7 (100.660639ms)
-- stdout --
	multinode-447637
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-447637-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-447637 status --alsologtostderr: exit status 7 (90.530264ms)
-- stdout --
	multinode-447637
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-447637-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0111 07:53:39.434341 3255602 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:53:39.434547 3255602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:53:39.434577 3255602 out.go:374] Setting ErrFile to fd 2...
	I0111 07:53:39.434599 3255602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:53:39.434869 3255602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:53:39.435099 3255602 out.go:368] Setting JSON to false
	I0111 07:53:39.435156 3255602 mustload.go:66] Loading cluster: multinode-447637
	I0111 07:53:39.435268 3255602 notify.go:221] Checking for updates...
	I0111 07:53:39.435647 3255602 config.go:182] Loaded profile config "multinode-447637": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:53:39.435684 3255602 status.go:174] checking status of multinode-447637 ...
	I0111 07:53:39.436559 3255602 cli_runner.go:164] Run: docker container inspect multinode-447637 --format={{.State.Status}}
	I0111 07:53:39.455190 3255602 status.go:371] multinode-447637 host status = "Stopped" (err=<nil>)
	I0111 07:53:39.455210 3255602 status.go:384] host is not running, skipping remaining checks
	I0111 07:53:39.455217 3255602 status.go:176] multinode-447637 status: &{Name:multinode-447637 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:53:39.455252 3255602 status.go:174] checking status of multinode-447637-m02 ...
	I0111 07:53:39.455558 3255602 cli_runner.go:164] Run: docker container inspect multinode-447637-m02 --format={{.State.Status}}
	I0111 07:53:39.477852 3255602 status.go:371] multinode-447637-m02 host status = "Stopped" (err=<nil>)
	I0111 07:53:39.477876 3255602 status.go:384] host is not running, skipping remaining checks
	I0111 07:53:39.477884 3255602 status.go:176] multinode-447637-m02 status: &{Name:multinode-447637-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

TestMultiNode/serial/RestartMultiNode (53.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-447637 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-447637 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.556419496s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-447637 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.26s)

TestMultiNode/serial/ValidateNameConflict (31.31s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-447637
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-447637-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-447637-m02 --driver=docker  --container-runtime=containerd: exit status 14 (98.410471ms)
-- stdout --
	* [multinode-447637-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-447637-m02' is duplicated with machine name 'multinode-447637-m02' in profile 'multinode-447637'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-447637-m03 --driver=docker  --container-runtime=containerd
E0111 07:54:37.236591 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-447637-m03 --driver=docker  --container-runtime=containerd: (28.651052849s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-447637
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-447637: exit status 80 (371.050147ms)
-- stdout --
	* Adding node m03 to cluster multinode-447637 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-447637-m03 already exists in multinode-447637-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-447637-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-447637-m03: (2.130852532s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.31s)
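Note: the name-conflict checks above drive the minikube binary and assert on exit codes (14 for the MK_USAGE duplicate-profile error, 80 for GUEST_NODE_ADD). Below is a minimal standalone sketch of the same kind of check; it is not part of the test suite, and the binary path and profile name are simply reused from this log as assumptions.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Starting a profile whose name collides with a machine name that already
	// belongs to another profile is expected to fail with exit status 14 (MK_USAGE).
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "multinode-447637-m02", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}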

                                                
                                    
TestScheduledStopUnix (102.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-303814 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-303814 --memory=3072 --driver=docker  --container-runtime=containerd: (25.591203879s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-303814 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:55:33.901299 3265106 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:55:33.901479 3265106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:55:33.901492 3265106 out.go:374] Setting ErrFile to fd 2...
	I0111 07:55:33.901498 3265106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:55:33.901792 3265106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:55:33.902125 3265106 out.go:368] Setting JSON to false
	I0111 07:55:33.902282 3265106 mustload.go:66] Loading cluster: scheduled-stop-303814
	I0111 07:55:33.902687 3265106 config.go:182] Loaded profile config "scheduled-stop-303814": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:55:33.902790 3265106 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/scheduled-stop-303814/config.json ...
	I0111 07:55:33.903023 3265106 mustload.go:66] Loading cluster: scheduled-stop-303814
	I0111 07:55:33.903185 3265106 config.go:182] Loaded profile config "scheduled-stop-303814": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-303814 -n scheduled-stop-303814
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-303814 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:55:34.375515 3265196 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:55:34.375699 3265196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:55:34.375727 3265196 out.go:374] Setting ErrFile to fd 2...
	I0111 07:55:34.375746 3265196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:55:34.376158 3265196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:55:34.377771 3265196 out.go:368] Setting JSON to false
	I0111 07:55:34.378517 3265196 daemonize_unix.go:73] killing process 3265122 as it is an old scheduled stop
	I0111 07:55:34.382655 3265196 mustload.go:66] Loading cluster: scheduled-stop-303814
	I0111 07:55:34.383072 3265196 config.go:182] Loaded profile config "scheduled-stop-303814": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:55:34.383153 3265196 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/scheduled-stop-303814/config.json ...
	I0111 07:55:34.383371 3265196 mustload.go:66] Loading cluster: scheduled-stop-303814
	I0111 07:55:34.383492 3265196 config.go:182] Loaded profile config "scheduled-stop-303814": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0111 07:55:34.387757 3124484 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/scheduled-stop-303814/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-303814 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-303814 -n scheduled-stop-303814
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-303814
E0111 07:56:00.284406 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-303814 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:56:00.644258 3265892 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:56:00.644512 3265892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:00.644545 3265892 out.go:374] Setting ErrFile to fd 2...
	I0111 07:56:00.644565 3265892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:00.644960 3265892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 07:56:00.645363 3265892 out.go:368] Setting JSON to false
	I0111 07:56:00.645544 3265892 mustload.go:66] Loading cluster: scheduled-stop-303814
	I0111 07:56:00.646191 3265892 config.go:182] Loaded profile config "scheduled-stop-303814": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 07:56:00.646350 3265892 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/scheduled-stop-303814/config.json ...
	I0111 07:56:00.646660 3265892 mustload.go:66] Loading cluster: scheduled-stop-303814
	I0111 07:56:00.646880 3265892 config.go:182] Loaded profile config "scheduled-stop-303814": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-303814
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-303814: exit status 7 (73.538421ms)

                                                
                                                
-- stdout --
	scheduled-stop-303814
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-303814 -n scheduled-stop-303814
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-303814 -n scheduled-stop-303814: exit status 7 (75.142786ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-303814" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-303814
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-303814: (4.758981962s)
--- PASS: TestScheduledStopUnix (102.31s)
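Note: the flow above schedules a stop, replaces an older scheduled stop (killing its daemon process), cancels a schedule, and finally lets a 15s schedule run to completion, after which status exits with code 7 and reports Stopped. A minimal sketch of the schedule-then-poll part follows; it is not from the suite, and the binary path, profile name, and timings are assumptions taken from this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-303814"

	// Schedule a stop 15 seconds in the future; minikube daemonizes the actual stop.
	_ = exec.Command("out/minikube-linux-arm64", "stop", "-p", profile, "--schedule", "15s").Run()

	// Poll the host state. Once the node is stopped, status exits non-zero,
	// so only the printed value is inspected here.
	for i := 0; i < 30; i++ {
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		fmt.Println("host:", state)
		if state == "Stopped" {
			return
		}
		time.Sleep(5 * time.Second)
	}
}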

                                                
                                    
TestInsufficientStorage (12.5s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-234138 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-234138 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.950539611s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ebd3baf2-8d27-4527-a65f-ed57463554cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-234138] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b5b6d58-fe00-447a-b44a-e7e03781f818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22402"}}
	{"specversion":"1.0","id":"36740db7-7da6-4f7a-aa8c-9c4aedf6136a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"81e70169-750e-4439-bb91-af21559649b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig"}}
	{"specversion":"1.0","id":"53a3bd73-3bac-4bcf-a6a2-dda7677b245d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube"}}
	{"specversion":"1.0","id":"7ffeb681-74fe-4507-9412-57262edf2283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8865c322-0256-423b-a297-04654f715257","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a96ca116-1449-477f-913c-35bd19928c97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7ec3f165-e092-4c79-874b-4985fe3fa047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"92819147-4199-4b54-ba50-715c735847ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"13a09a4e-3932-4601-af65-130bcbfe6817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2695548c-ede6-43e4-886f-c74171f7dc6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-234138\" primary control-plane node in \"insufficient-storage-234138\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d6c03a3-3baf-4c85-836d-50a5659b9459","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1768032998-22402 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3254e0c1-c59e-4c8e-9d64-938652253e1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"95ca21e2-79f3-4267-8ad3-346fcb2ebbc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-234138 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-234138 --output=json --layout=cluster: exit status 7 (312.19067ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-234138","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-234138","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 07:57:00.815547 3267741 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-234138" does not appear in /home/jenkins/minikube-integration/22402-3122619/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-234138 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-234138 --output=json --layout=cluster: exit status 7 (299.782285ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-234138","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-234138","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 07:57:01.114861 3267807 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-234138" does not appear in /home/jenkins/minikube-integration/22402-3122619/kubeconfig
	E0111 07:57:01.125884 3267807 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/insufficient-storage-234138/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-234138" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-234138
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-234138: (1.938649285s)
--- PASS: TestInsufficientStorage (12.50s)
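Note: with --output=json, minikube start emits one CloudEvents JSON object per line, as captured above; the RSRC_DOCKER_STORAGE error event carries exitcode 26 plus advice text. A minimal decoding sketch is below, modelling only the fields visible in this log and reading events from stdin; it is not part of the suite.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // error events can be long lines
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		// Step events carry currentstep/name; io.k8s.sigs.minikube.error events carry exitcode and advice.
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}

Piped usage, under the same assumptions: out/minikube-linux-arm64 start -p demo --output=json | go run cloudevents_sketch.go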

                                                
                                    
TestRunningBinaryUpgrade (328.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1254512512 start -p running-upgrade-719084 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1254512512 start -p running-upgrade-719084 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (41.269787701s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-719084 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-719084 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m44.625287393s)
helpers_test.go:176: Cleaning up "running-upgrade-719084" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-719084
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-719084: (2.166631903s)
--- PASS: TestRunningBinaryUpgrade (328.98s)

                                                
                                    
TestKubernetesUpgrade (85.13s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-760288 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-760288 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.757235304s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-760288 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-760288 --alsologtostderr: (1.463258876s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-760288 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-760288 status --format={{.Host}}: exit status 7 (85.932413ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-760288 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-760288 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.653192139s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-760288 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-760288 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-760288 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (115.921758ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-760288] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-760288
	    minikube start -p kubernetes-upgrade-760288 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7602882 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-760288 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-760288 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-760288 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (13.600479694s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-760288" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-760288
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-760288: (2.321048779s)
--- PASS: TestKubernetesUpgrade (85.13s)
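Note: the sequence above starts at v1.28.0, stops, restarts at v1.35.0, confirms the server version via kubectl (version_upgrade_test.go:248), and then shows that a downgrade attempt is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch of the post-upgrade version check follows; it is not from the suite, the context name is reused from this log, and kubectl on PATH is an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-760288",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	var v struct {
		ServerVersion struct {
			GitVersion string `json:"gitVersion"`
		} `json:"serverVersion"`
	}
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	// After the upgrade path in this test, the expected value is v1.35.0.
	fmt.Println("server version:", v.ServerVersion.GitVersion)
}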

                                                
                                    
TestMissingContainerUpgrade (152.19s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3572983434 start -p missing-upgrade-402410 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3572983434 start -p missing-upgrade-402410 --memory=3072 --driver=docker  --container-runtime=containerd: (1m7.231657896s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-402410
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-402410: (1.326875181s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-402410
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-402410 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-402410 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m19.596974072s)
helpers_test.go:176: Cleaning up "missing-upgrade-402410" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-402410
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-402410: (3.033629103s)
--- PASS: TestMissingContainerUpgrade (152.19s)

                                                
                                    
TestPause/serial/Start (54.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-611159 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-611159 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (54.442292686s)
--- PASS: TestPause/serial/Start (54.44s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-611159 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0111 07:58:00.476832 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-611159 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.476086144s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.51s)

                                                
                                    
TestPause/serial/Pause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-611159 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-611159 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-611159 --output=json --layout=cluster: exit status 2 (425.487802ms)

                                                
                                                
-- stdout --
	{"Name":"pause-611159","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-611159","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
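Note: status --output=json --layout=cluster reports HTTP-like codes per component, as seen above (200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage in the earlier TestInsufficientStorage capture), and the overall exit status 2 here reflects the paused state. A minimal decoding sketch is below; it models only the fields visible in this log, reads the JSON from stdin, and is not part of the suite.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%s: %d %s, %d node(s)\n", st.Name, st.StatusCode, st.StatusName, len(st.Nodes))
}

Piped usage, under the same assumptions: out/minikube-linux-arm64 status -p pause-611159 --output=json --layout=cluster | go run cluster_status_sketch.go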

                                                
                                    
TestPause/serial/Unpause (0.89s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-611159 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

                                                
                                    
TestPause/serial/PauseAgain (1.3s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-611159 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-611159 --alsologtostderr -v=5: (1.296621364s)
--- PASS: TestPause/serial/PauseAgain (1.30s)

                                                
                                    
TestPause/serial/DeletePaused (3.28s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-611159 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-611159 --alsologtostderr -v=5: (3.276638655s)
--- PASS: TestPause/serial/DeletePaused (3.28s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.2s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-611159
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-611159: exit status 1 (18.822434ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-611159: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (313.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2974031455 start -p stopped-upgrade-035637 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E0111 07:59:37.236718 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2974031455 start -p stopped-upgrade-035637 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (44.54600629s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2974031455 -p stopped-upgrade-035637 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2974031455 -p stopped-upgrade-035637 stop: (1.625565643s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-035637 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0111 08:03:00.480658 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:04:37.236142 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-035637 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m27.809741681s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (313.98s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (5.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-035637
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-035637: (5.228579319s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (5.23s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (65.51s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-819303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-819303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (58.716678635s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-819303 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-819303
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-819303: (5.942734847s)
--- PASS: TestPreload/Start-NoPreload-PullImage (65.51s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (50.81s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-819303 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0111 08:06:03.530743 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-819303 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (50.556562781s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-819303 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (50.81s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858823 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-858823 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (98.935424ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-858823] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (27.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858823 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858823 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.218131486s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-858823 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.60s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858823 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858823 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (13.883317515s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-858823 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-858823 status -o json: exit status 2 (320.174296ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-858823","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-858823
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-858823: (2.016882027s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.22s)

                                                
                                    
TestNoKubernetes/serial/Start (7.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858823 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858823 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.325879358s)
--- PASS: TestNoKubernetes/serial/Start (7.33s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22402-3122619/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-858823 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-858823 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.81271ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
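Note: the check above runs systemctl is-active for the kubelet unit inside the node over minikube ssh, and a non-zero exit (status 3 from systemd, surfaced as exit status 1) is the expected outcome when the cluster was started with --no-kubernetes. A minimal standalone sketch of that check follows; it is not part of the suite, and the binary path and profile name are reused from this log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-858823",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// With --no-kubernetes the kubelet unit is inactive, so this branch is the expected one.
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}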

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-858823
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-858823: (1.316718087s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858823 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858823 --driver=docker  --container-runtime=containerd: (6.644250478s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.64s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-858823 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-858823 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.902831ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestNetworkPlugins/group/false (3.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-017834 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-017834 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (202.611609ms)

                                                
                                                
-- stdout --
	* [false-017834] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:08:03.053554 3323419 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:08:03.053703 3323419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:08:03.053714 3323419 out.go:374] Setting ErrFile to fd 2...
	I0111 08:08:03.053720 3323419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:08:03.054101 3323419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-3122619/.minikube/bin
	I0111 08:08:03.054925 3323419 out.go:368] Setting JSON to false
	I0111 08:08:03.055830 3323419 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":49834,"bootTime":1768069049,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0111 08:08:03.055903 3323419 start.go:143] virtualization:  
	I0111 08:08:03.059373 3323419 out.go:179] * [false-017834] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:08:03.063246 3323419 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:08:03.063304 3323419 notify.go:221] Checking for updates...
	I0111 08:08:03.066275 3323419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:08:03.069377 3323419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-3122619/kubeconfig
	I0111 08:08:03.072459 3323419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-3122619/.minikube
	I0111 08:08:03.075376 3323419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:08:03.078491 3323419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:08:03.081888 3323419 config.go:182] Loaded profile config "force-systemd-env-305397": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
	I0111 08:08:03.082053 3323419 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:08:03.114199 3323419 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:08:03.114408 3323419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:08:03.187387 3323419 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:08:03.177103219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:08:03.187499 3323419 docker.go:319] overlay module found
	I0111 08:08:03.190690 3323419 out.go:179] * Using the docker driver based on user configuration
	I0111 08:08:03.193687 3323419 start.go:309] selected driver: docker
	I0111 08:08:03.193713 3323419 start.go:928] validating driver "docker" against <nil>
	I0111 08:08:03.193730 3323419 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:08:03.197558 3323419 out.go:203] 
	W0111 08:08:03.200543 3323419 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0111 08:08:03.203604 3323419 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-017834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-017834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-017834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-017834

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-017834

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-017834

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-017834

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-017834

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-017834

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-017834

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-017834

>>> host: /etc/nsswitch.conf:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /etc/hosts:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /etc/resolv.conf:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-017834

>>> host: crictl pods:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: crictl containers:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> k8s: describe netcat deployment:
error: context "false-017834" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-017834" does not exist

>>> k8s: netcat logs:
error: context "false-017834" does not exist

>>> k8s: describe coredns deployment:
error: context "false-017834" does not exist

>>> k8s: describe coredns pods:
error: context "false-017834" does not exist

>>> k8s: coredns logs:
error: context "false-017834" does not exist

>>> k8s: describe api server pod(s):
error: context "false-017834" does not exist

>>> k8s: api server logs:
error: context "false-017834" does not exist

>>> host: /etc/cni:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: ip a s:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: ip r s:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: iptables-save:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: iptables table nat:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> k8s: describe kube-proxy daemon set:
error: context "false-017834" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-017834" does not exist

>>> k8s: kube-proxy logs:
error: context "false-017834" does not exist

>>> host: kubelet daemon status:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: kubelet daemon config:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> k8s: kubelet logs:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-017834

>>> host: docker daemon status:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: docker daemon config:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /etc/docker/daemon.json:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: docker system info:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: cri-docker daemon status:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: cri-docker daemon config:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: cri-dockerd version:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: containerd daemon status:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: containerd daemon config:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /etc/containerd/config.toml:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: containerd config dump:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: crio daemon status:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: crio daemon config:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: /etc/crio:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

>>> host: crio config:
* Profile "false-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017834"

----------------------- debugLogs end: false-017834 [took: 3.284920566s] --------------------------------
helpers_test.go:176: Cleaning up "false-017834" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-017834
--- PASS: TestNetworkPlugins/group/false (3.66s)

TestStartStop/group/old-k8s-version/serial/FirstStart (60.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-334404 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E0111 08:14:37.235622 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-334404 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m0.471339352s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.47s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-334404 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9115f469-fd5d-40ee-be6a-845633548bd2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9115f469-fd5d-40ee-be6a-845633548bd2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003503986s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-334404 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-334404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-334404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.060096504s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-334404 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-334404 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-334404 --alsologtostderr -v=3: (12.107425709s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-334404 -n old-k8s-version-334404
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-334404 -n old-k8s-version-334404: exit status 7 (70.551353ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-334404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (51.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-334404 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-334404 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (51.347361923s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-334404 -n old-k8s-version-334404
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.72s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-ln65t" [31a90c8c-36d6-414a-beb5-2dc13dbd99d2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004423755s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-ln65t" [31a90c8c-36d6-414a-beb5-2dc13dbd99d2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003178297s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-334404 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-334404 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-334404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-334404 -n old-k8s-version-334404
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-334404 -n old-k8s-version-334404: exit status 2 (327.079693ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-334404 -n old-k8s-version-334404
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-334404 -n old-k8s-version-334404: exit status 2 (332.808269ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-334404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-334404 -n old-k8s-version-334404
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-334404 -n old-k8s-version-334404
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

TestStartStop/group/no-preload/serial/FirstStart (51.09s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-563183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-563183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (51.089973803s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.09s)

TestStartStop/group/no-preload/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-563183 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0ddd1de0-adf3-408e-850c-6510c6ea6b4f] Pending
helpers_test.go:353: "busybox" [0ddd1de0-adf3-408e-850c-6510c6ea6b4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0ddd1de0-adf3-408e-850c-6510c6ea6b4f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003820565s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-563183 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-563183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-563183 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/no-preload/serial/Stop (12.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-563183 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-563183 --alsologtostderr -v=3: (12.080340159s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-563183 -n no-preload-563183
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-563183 -n no-preload-563183: exit status 7 (67.138257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-563183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (53.47s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-563183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E0111 08:18:00.476695 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-563183 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (53.109909925s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-563183 -n no-preload-563183
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.47s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fgv2m" [37a80f2f-b186-4f40-8ada-c3a4ff75be66] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003638476s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-fgv2m" [37a80f2f-b186-4f40-8ada-c3a4ff75be66] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003195275s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-563183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-563183 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.17s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-563183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-563183 -n no-preload-563183
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-563183 -n no-preload-563183: exit status 2 (340.187136ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-563183 -n no-preload-563183
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-563183 -n no-preload-563183: exit status 2 (323.975692ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-563183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-563183 -n no-preload-563183
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-563183 -n no-preload-563183
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

TestStartStop/group/embed-certs/serial/FirstStart (46.87s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-239792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E0111 08:19:37.236126 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-239792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (46.874196774s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.87s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-239792 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5d0fd61a-848b-445d-9bab-b074ef8e5c22] Pending
helpers_test.go:353: "busybox" [5d0fd61a-848b-445d-9bab-b074ef8e5c22] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5d0fd61a-848b-445d-9bab-b074ef8e5c22] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003520728s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-239792 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-239792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-239792 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/embed-certs/serial/Stop (12.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-239792 --alsologtostderr -v=3
E0111 08:20:04.887108 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:04.892463 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:04.902769 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:04.923129 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:04.963530 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:05.043953 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:05.204464 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:05.525177 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:06.165658 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:07.446700 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-239792 --alsologtostderr -v=3: (12.42111312s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.42s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-342401 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-342401 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (49.579808515s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239792 -n embed-certs-239792
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239792 -n embed-certs-239792: exit status 7 (137.026322ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-239792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (51.2s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-239792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E0111 08:20:15.128380 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:25.369510 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:20:45.850154 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-239792 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (50.794423714s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239792 -n embed-certs-239792
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.20s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-342401 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [241a65be-5f71-4c5f-9f1a-2e70c6dd916e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [241a65be-5f71-4c5f-9f1a-2e70c6dd916e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.0037511s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-342401 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-6r7kc" [a5cfb181-94bc-49e6-8f83-e90f39d4181c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004203057s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-6r7kc" [a5cfb181-94bc-49e6-8f83-e90f39d4181c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003626722s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-239792 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-342401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-342401 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024406267s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-342401 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (14.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-342401 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-342401 --alsologtostderr -v=3: (14.191556881s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.19s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-239792 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-239792 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239792 -n embed-certs-239792
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239792 -n embed-certs-239792: exit status 2 (334.680491ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239792 -n embed-certs-239792
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239792 -n embed-certs-239792: exit status 2 (318.548921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-239792 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239792 -n embed-certs-239792
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239792 -n embed-certs-239792
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.02s)

TestStartStop/group/newest-cni/serial/FirstStart (35.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-896803 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-896803 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (35.877408309s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401: exit status 7 (105.87696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-342401 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)
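
EnableAddonAfterStop confirms the host reports Stopped (status exits 7, tolerated as "may be ok") and then enables the dashboard addon while the cluster is down, so the addon can be verified once the profile is started again. A small Go sketch of that flow, reusing the command line from the log; the note that the addon takes effect on the next start is inferred from the test sequence, not from minikube documentation.

// addonafterstop.go: sketch of the EnableAddonAfterStop step: check that the
// host is stopped (a non-zero status exit is expected here), then enable the
// dashboard addon with the image override used by the test.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-342401"
	mk := "out/minikube-linux-arm64"

	// Status exits 7 while the host is stopped; the test records it as "may be ok".
	out, err := exec.Command(mk, "status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	fmt.Printf("host status: %q (err: %v, may be ok)\n", string(out), err)

	// Enabling an addon while stopped is allowed; it shows up after the next start.
	enable := exec.Command(mk, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("addons enable failed: %v\n%s", err, out)
	}
}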

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-342401 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
E0111 08:21:26.810795 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-342401 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (55.194891994s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.59s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-896803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-896803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.138144373s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-896803 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-896803 --alsologtostderr -v=3: (1.361456974s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-896803 -n newest-cni-896803
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-896803 -n newest-cni-896803: exit status 7 (84.772128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-896803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (15.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-896803 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-896803 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0: (14.941920709s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-896803 -n newest-cni-896803
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-896803 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-896803 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-896803 -n newest-cni-896803
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-896803 -n newest-cni-896803: exit status 2 (357.612713ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-896803 -n newest-cni-896803
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-896803 -n newest-cni-896803: exit status 2 (356.19425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-896803 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-896803 -n newest-cni-896803
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-896803 -n newest-cni-896803
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs (4.67s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-001704 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-001704 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (4.480560657s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-001704" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-001704
--- PASS: TestPreload/PreloadSrc/gcs (4.67s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-g6jl5" [f1dcd784-9c66-4cba-95de-03d87d664b27] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003914004s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestPreload/PreloadSrc/github (4.13s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-404201 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-404201 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd: (3.944083591s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-404201" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-404201
E0111 08:22:29.625950 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPreload/PreloadSrc/github (4.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-g6jl5" [f1dcd784-9c66-4cba-95de-03d87d664b27] Running
E0111 08:22:28.986909 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:28.992222 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:29.003038 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:29.023283 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:29.063573 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:29.143880 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:29.304434 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003469671s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-342401 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)
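
Both UserAppExistsAfterStop and AddonExistsAfterStop poll for up to 9 minutes until the kubernetes-dashboard pods restored after the restart are Running and healthy. A rough equivalent using kubectl wait rather than the test's own polling helper is sketched below; it approximates the check, it is not the helper's logic.

// dashboardwait.go: rough equivalent of the dashboard checks above: wait until
// the pods labelled k8s-app=kubernetes-dashboard in the restored cluster are Ready.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-342401",
		"-n", "kubernetes-dashboard", "wait", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"--for=condition=Ready", "--timeout=9m")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("dashboard pods not ready: %v\n%s", err, out)
	}
	log.Printf("dashboard pods ready:\n%s", out)
}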

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs-cached (0.53s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-040729 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-040729" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-040729
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.53s)
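
The three PreloadSrc subtests download the same v1.34.0-rc preload tarball from different sources: the gcs and github runs each take a few seconds, while the final gcs-cached run finishes in about half a second because the tarball fetched for v1.34.0-rc.2 is already in the local preload cache. Below is a sketch of timing a cold versus cached --download-only start; the profile names are made up for the example, and the flags are the ones the test passes.

// preloadsrc.go: rough sketch of the preload-source comparison above. It runs
// the same --download-only start twice and times each run; the second run
// should be nearly instant once the preload tarball is in the local cache.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func downloadOnly(profile, source string) time.Duration {
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--download-only", "--kubernetes-version", "v1.34.0-rc.2",
		"--preload-source="+source, "--driver=docker", "--container-runtime=containerd")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("download-only (%s) failed: %v\n%s", source, err, out)
	}
	// Delete the throwaway profile, as helpers_test.go does after each subtest.
	exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).Run()
	return time.Since(start)
}

func main() {
	first := downloadOnly("preload-src-demo-1", "github")
	second := downloadOnly("preload-src-demo-2", "gcs") // hits the cache left by the first run
	fmt.Printf("first (cold) run:    %s\n", first)
	fmt.Printf("second (cached) run: %s\n", second)
}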

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (53.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0111 08:22:30.266944 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:31.547494 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (53.275775169s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-342401 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-342401 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401
E0111 08:22:34.107929 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401: exit status 2 (575.595423ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401: exit status 2 (500.617201ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-342401 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-342401 -n default-k8s-diff-port-342401
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.53s)
E0111 08:27:23.032974 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:28.987159 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:27:56.672140 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (57.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0111 08:22:43.531389 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:48.731891 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:22:49.469149 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:23:00.476934 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:23:09.949852 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (57.102143059s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-017834 "pgrep -a kubelet"
I0111 08:23:23.782321 3124484 config.go:182] Loaded profile config "auto-017834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
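
The KubeletFlags check sshes into the node and captures the running kubelet command line with "pgrep -a kubelet", so individual flags can be inspected for the chosen runtime. A small sketch follows; the specific flag grepped for (the containerd runtime endpoint) is an illustrative assumption, not the assertion the test makes.

// kubeletflags.go: sketch of the KubeletFlags check: ssh into the node and
// capture the running kubelet command line so individual flags can be inspected.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "auto-017834"
	out, err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile, "pgrep -a kubelet").Output()
	if err != nil {
		log.Fatalf("ssh pgrep failed: %v", err)
	}
	cmdline := strings.TrimSpace(string(out))
	fmt.Println(cmdline)
	// Illustrative check only: with containerd one would expect the kubelet to
	// point at the containerd socket.
	if !strings.Contains(cmdline, "--container-runtime-endpoint=unix:///run/containerd/containerd.sock") {
		fmt.Println("warning: expected containerd runtime endpoint not found")
	}
}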

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-017834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5j2lz" [36f2f34f-8c6b-4312-8ca5-c4d4e15e9f82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-5j2lz" [36f2f34f-8c6b-4312-8ca5-c4d4e15e9f82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004140451s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-017834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
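
Each network-plugin group runs the same connectivity matrix against the netcat deployment created from testdata/netcat-deployment.yaml: a DNS lookup of kubernetes.default, a localhost port check, and a hairpin check back through the pod's own service. A short Go sketch of those three probes, using the auto- profile's kubectl context from this run; the same calls repeat for flannel, calico, and the other plugins with only the context name changing.

// netprobe.go: sketch of the DNS/Localhost/HairPin probes run against each CNI
// above. All three are kubectl execs into the netcat deployment.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func probe(context string, args ...string) {
	full := append([]string{"--context", context, "exec", "deployment/netcat", "--"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("probe %v failed: %v\n%s", args, err, out)
	}
	fmt.Printf("ok: %v\n", args)
}

func main() {
	ctx := "auto-017834"
	// DNS: the in-cluster resolver must answer for the default service.
	probe(ctx, "nslookup", "kubernetes.default")
	// Localhost: the pod can reach its own listener on 8080.
	probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: the pod can reach itself back through its service name.
	probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}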

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-dglf6" [ea0cbba0-b46d-49c3-9e7b-28522b87ed12] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003676272s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-017834 "pgrep -a kubelet"
I0111 08:23:44.462041 3124484 config.go:182] Loaded profile config "flannel-017834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-017834 replace --force -f testdata/netcat-deployment.yaml
I0111 08:23:44.829024 3124484 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-bm6xd" [1b229a50-51b5-485e-ba44-af2d7a83af3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-bm6xd" [1b229a50-51b5-485e-ba44-af2d7a83af3f] Running
E0111 08:23:50.909998 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003746929s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-017834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (64.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m4.238596902s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0111 08:24:37.235634 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/functional-214480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.563372332s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-wlh62" [846b9486-b7b8-4f3b-b8e0-7e52c87d1bb2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0111 08:25:04.887486 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "calico-node-wlh62" [846b9486-b7b8-4f3b-b8e0-7e52c87d1bb2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004511812s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-017834 "pgrep -a kubelet"
I0111 08:25:07.863512 3124484 config.go:182] Loaded profile config "calico-017834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-017834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-frfrb" [d40013c3-d63a-4952-bca9-491d4676e47b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-frfrb" [d40013c3-d63a-4952-bca9-491d4676e47b] Running
E0111 08:25:12.831026 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/no-preload-563183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003610431s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-017834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-017834 "pgrep -a kubelet"
I0111 08:25:21.995250 3124484 config.go:182] Loaded profile config "custom-flannel-017834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-017834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-cpwfz" [14c42157-764a-41b1-aefb-ef1c651939e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-cpwfz" [14c42157-764a-41b1-aefb-ef1c651939e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010470098s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-017834 exec deployment/netcat -- nslookup kubernetes.default
E0111 08:25:32.572164 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/old-k8s-version-334404/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (53.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (53.078861514s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (43.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0111 08:26:01.110359 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:01.115593 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:01.125829 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:01.146069 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:01.186314 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:01.266785 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:01.427448 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:01.748363 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:02.389192 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:03.670115 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:06.231228 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:11.351788 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:21.592475 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (43.571718914s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-jm866" [6682e4da-6944-467f-8546-5229dfc4a338] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003927816s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-017834 "pgrep -a kubelet"
I0111 08:26:41.340825 3124484 config.go:182] Loaded profile config "kindnet-017834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-017834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lc428" [685757d3-a922-4a63-8459-a096d95972e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0111 08:26:42.072676 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/default-k8s-diff-port-342401/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-lc428" [685757d3-a922-4a63-8459-a096d95972e8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004207699s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-017834 "pgrep -a kubelet"
I0111 08:26:42.795155 3124484 config.go:182] Loaded profile config "bridge-017834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-017834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-fmkgh" [59c142f4-d5c1-4f44-b483-aa1aff28a8ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-fmkgh" [59c142f4-d5c1-4f44-b483-aa1aff28a8ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003852425s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-017834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-017834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (41.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-017834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (41.686971061s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-017834 "pgrep -a kubelet"
I0111 08:27:58.461649 3124484 config.go:182] Loaded profile config "enable-default-cni-017834": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-017834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-6smfp" [63d2904a-eea8-47b7-9ab8-9f6b2f07158f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0111 08:28:00.476763 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-6smfp" [63d2904a-eea8-47b7-9ab8-9f6b2f07158f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003674357s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-017834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-017834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    

Test skip (30/337)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.46s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-934131 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-934131" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-934131
--- SKIP: TestDownloadOnlyKic (0.46s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-660673" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-660673
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.73s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
E0111 08:08:00.477139 3124484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-3122619/.minikube/profiles/addons-709292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: kubenet-017834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-017834" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-017834

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017834"

                                                
                                                
----------------------- debugLogs end: kubenet-017834 [took: 3.577031469s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-017834" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-017834
--- SKIP: TestNetworkPlugins/group/kubenet (3.73s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.81s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-017834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-017834" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-017834

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-017834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017834"

                                                
                                                
----------------------- debugLogs end: cilium-017834 [took: 3.6478641s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-017834" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-017834
--- SKIP: TestNetworkPlugins/group/cilium (3.81s)